
Two-Step Verification: How to Enable It on WhatsApp, Instagram, and TikTok to Protect Your Accounts

Social networks have become one of cybercriminals' main targets. Thousands of cases of hacked Instagram, Facebook, and TikTok accounts are reported every day. One of the most effective ways to prevent this is to enable two-step verification, also known as two-factor authentication (2FA), a system that adds an extra layer of security.

With this method, a password alone is not enough to access an account: a second identification step is also required, such as receiving a code on your phone or using an authenticator app. That way, even if someone discovers your password, they cannot sign in without that second factor.

Each platform offers different options. In WhatsApp, the feature is enabled under Settings → Account → Two-step verification. You simply create a six-digit PIN and, optionally, add an email address to recover it if you ever forget it.

In Instagram, the option lives under Accounts Center → Password and security → Two-factor authentication, where you can choose to receive the code via SMS, WhatsApp, or an authenticator app. In TikTok, it is found under Settings and privacy → Security → 2-step verification, with three possible methods: SMS, email, or an authenticator app.

Experts recommend always using an authenticator app (such as Google Authenticator, Authy, or Microsoft Authenticator) rather than SMS, since text messages can be intercepted through SIM-swapping attacks.
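Authenticator apps like the ones above implement the TOTP standard (RFC 6238): a shared secret plus the current time produces a short code that changes every 30 seconds, with no network involved. As a rough illustration, here is a minimal sketch in Python using only the standard library; the secret shown is the RFC test value, not a real account key.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, t=None, digits=6, period=30):
    """Compute an RFC 6238 TOTP code from a Base32-encoded shared secret."""
    if t is None:
        t = int(time.time())
    key = base64.b32decode(secret_b32.upper())
    # The moving factor is the number of elapsed time steps, as an
    # 8-byte big-endian counter (same core as HOTP, RFC 4226).
    counter = struct.pack(">Q", t // period)
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    # Dynamic truncation: the low nibble of the last byte picks an offset.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test secret "12345678901234567890" in Base32:
SECRET = "GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ"
print(totp(SECRET))  # the 6-digit code your app would show right now
```

Because both sides derive the code from time and the shared secret, intercepting one code is nearly useless: it expires within seconds, which is exactly what makes this stronger than an SMS that can be redirected via a SIM swap.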

If you lose your phone, you can recover your accounts as long as you had backup methods configured, such as alternate email addresses or saved recovery codes. If you did not, you will need to contact support and verify your identity.
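Those recovery codes are simply random one-time strings generated when you enable 2FA, which is why they must be saved somewhere safe at that moment. A minimal sketch of the idea, assuming an illustrative format and alphabet (not any platform's actual scheme):

```python
import secrets

# Alphabet omits look-alike characters (0/O, 1/I/L) so codes are easy
# to copy down by hand; this choice is illustrative, not a standard.
ALPHABET = "23456789ABCDEFGHJKMNPQRSTUVWXYZ"

def make_recovery_codes(count=10, length=8):
    """Generate `count` one-time recovery codes of `length` characters,
    using a cryptographically secure random source."""
    return [
        "".join(secrets.choice(ALPHABET) for _ in range(length))
        for _ in range(count)
    ]

for code in make_recovery_codes():
    print(code)
```

The key design point is `secrets` rather than `random`: recovery codes bypass the second factor entirely, so they must be unpredictable, and each one is invalidated server-side after a single use.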

It is also worth reviewing the active sessions on your accounts, closing any you don't recognize, and keeping your passwords up to date with the help of a trusted password manager.

Report Claims Meta Earned $16 Billion in 2024 from Fraudulent Ads on Facebook and Instagram

A new report has alleged that Meta Platforms — the parent company of Facebook, Instagram, and WhatsApp — earned a significant portion of its 2024 revenue from fraudulent and prohibited advertisements. According to internal projections, about 10.1 percent of Meta’s total revenue for the year reportedly came from ads linked to scams and banned goods. The findings suggest that certain internal practices and oversight failures allowed these fraudulent ads to remain active on its platforms, despite clear violations of company policy and advertising regulations.

Citing internal company documents, Reuters reported that Meta failed to effectively detect or block deceptive advertising for a range of illegal or misleading products and services. These included fake e-commerce listings, fraudulent investment schemes, unlicensed online casinos, and even banned medical products. The issue reportedly persisted for at least three years across Meta’s major apps — Facebook, Instagram, and WhatsApp — raising concerns about the company’s ad moderation and accountability practices.

The internal projections also claimed that around $16 billion (approximately ₹1.41 lakh crore) of Meta’s total 2024 revenue stemmed from these fraudulent ad sources. The report further alleged that Meta was hesitant to remove or suspend accounts, even those identified internally as “the scammiest scammers.” Executives reportedly feared that taking strict action against these advertisers would lead to a noticeable decline in ad revenue, which could in turn impact the company’s heavy investments in artificial intelligence (AI) development and infrastructure.

These revelations have sparked fresh debate about Meta’s commitment to user safety and transparency in digital advertising. Critics argue that prioritizing profits over consumer protection undermines trust in its platforms, especially as users increasingly encounter scams disguised as legitimate promotions. While Meta has yet to issue a detailed public response to these allegations, the report adds pressure on the company to tighten its ad screening processes and demonstrate stronger ethical oversight in its rapidly expanding AI-driven advertising ecosystem.

Motion Picture Association Orders Meta to Drop “PG-13” Label from Instagram Teen Filters

The Motion Picture Association (MPA) has issued a cease-and-desist letter to Meta, accusing the social media giant of misleadingly using the film industry’s “PG-13” rating in its new content filters for teen users on Instagram. The group said Meta’s claim that its filters are modeled on the movie rating system is “literally false and highly misleading.”

Meta announced last month that it would restrict what users under 18 see on Instagram by applying filters “inspired by the PG-13 rating system.” The MPA, however, says the comparison is inappropriate, emphasizing that its rating process involves a curated, consensus-driven assessment by human reviewers — not automated algorithms.

In an October 28 letter to Meta Chief Legal Officer Jennifer Newstead, the MPA demanded that the company immediately stop using the “PG-13” mark and disassociate its Teen Accounts and AI moderation tools from the film rating system, warning that unauthorized use could undermine public trust in movie ratings. The association asked Meta to resolve the issue by November 3.

A Meta spokesperson said the company had no intention of implying a partnership with the MPA and hopes to “work constructively” with the association to address concerns. Meta said the filter initiative was designed to give parents greater control over what teenagers see on its platforms.

The dispute comes as Meta faces growing scrutiny from regulators and advocacy groups over the safety of its younger users. The company has also faced lawsuits alleging that its social platforms expose minors to harmful content.