Kochava Faces Legal Battle with FTC Over Data Practices

Kochava, a prominent player in mobile app data analytics, is currently embroiled in a legal dispute with the Federal Trade Commission (FTC) over its data acquisition and AI-aided analytics practices. The outcome of this case could have significant implications for the global data marketplace and Congress’ approach to artificial intelligence (AI) and data privacy.

The FTC’s recently unsealed amended complaint against Kochava cites the company’s own marketing claims that it can provide data for “Any Channel, Any Device, Any Audience” and allow buyers to “Measure Everything with Kochava.” Separately, the FTC announced a settlement with data broker Outlogic, which it described as the “first-ever ban on the use and sale of sensitive location data.” Under the settlement, Outlogic must destroy the location data it possesses and is prohibited from collecting or using such information related to sensitive locations.

According to the FTC and proposed class-action lawsuits against Kochava, the company secretly collects vast amounts of consumer location and personal data without notice or consent. It then utilizes AI to analyze this data, enabling it to predict and influence consumer behavior in various ways. Kochava denies these allegations.

The FTC asserts that Kochava sells a comprehensive view of individuals, connecting precise geolocation data with email, demographics, devices, households, and channels. This detailed data includes information about visits to hospitals, reproductive health clinics, places of worship, homeless and domestic violence shelters, and addiction recovery facilities. The FTC argues that by selling such specific data, Kochava exposes individuals to potential threats of stigma, stalking, discrimination, job loss, and physical violence.

The litigation documents reveal that Kochava collects and sells its “Kochava Collective” data, which encompasses precise geolocation data, comprehensive consumer profiles, mobile app usage details, and audience segments. These audience segments can be based on behaviors and sensitive information such as gender identity, political and religious affiliation, race, medical information, and more. Kochava customers can target extremely specific groups by selecting certain audience segments, including individuals who identify as “other” or pregnant African American Muslim females.

The FTC claims that Kochava acquires consumer data through its software development kits installed in over 10,000 apps globally, as well as directly from other data brokers. A separate lawsuit in California makes similar allegations of surreptitious data collection and analysis by Kochava.

The FTC’s complaint highlights how advancing AI tools enable invasive data analysis, including the inference and disclosure of sensitive information such as medical records and images. AI also facilitates the manipulation of individual and group behavior, potentially influencing decisions without individuals’ knowledge.

The legal battle between Kochava and the FTC is now in discovery, and a trial is expected in 2025. The outcome of this case, combined with proposed legislation and the FTC’s focus on generative AI, data, and privacy, could lead to significant changes in data acquisition practices, AI analytics, and the lawful use of data in both machine- and human-based analysis.
