Artificial Intelligence and Data Privacy

Artificial Intelligence (AI) has become a transformative force in the digital economy, offering efficiencies and innovative applications across industries. However, AI’s reliance on vast amounts of data raises significant privacy concerns, challenging existing legal frameworks. In Kenya, data privacy is constitutionally anchored in Article 31 of the Constitution of Kenya, 2010, and operationalized through the Data Protection Act, 2019 (the Act). This commentary examines the implications of AI on data privacy and proposes measures to mitigate the associated risks.

Legal Framework Governing AI and Data Privacy

The Act provides the primary legal safeguards for personal data processing, with Section 37 prohibiting commercial use of personal data unless consent is obtained, the data is anonymized, or the use is authorized by law. AI systems typically engage in large-scale data collection and analysis, necessitating strict compliance with these provisions. However, Kenya’s existing regulatory framework lacks specificity in addressing AI-related privacy risks, and targeted interventions are therefore required.

Emerging Privacy Concerns in AI Development

The proliferation of Big Data and AI-driven analytics has intensified data collection practices. Companies leveraging AI may bypass data protection principles under Section 25 of the Act, resulting in unchecked data aggregation. Key concerns include:

     a. Transparency and Data Acquisition

AI companies often fail to disclose how their training datasets are obtained, leaving data subjects unable to assess the resulting privacy risks.

     b. Predictive and Generative AI Risks

AI models such as Large Language Models (LLMs) process personal data in training and generation, and may reproduce or expose sensitive information in their outputs.

     c. Exploitation and Misuse of Data

Unauthorized data use may facilitate identity theft, fraud, and reputational harm.

     d. Bias and Discriminatory Outcomes

AI-driven decisions in employment, healthcare, and financial services can amplify social biases, disproportionately affecting marginalized groups.

Recent lawsuits, including cases against OpenAI and Microsoft for unauthorized use of copyrighted content, and the USD 4.3 million lawsuit against Vodacom Tanzania for privacy violations, underscore the global and regional urgency of these concerns.

Addressing AI Privacy Risks: Legal and Regulatory Interventions

Given AI’s rapid evolution, Kenya must strengthen its regulatory landscape to ensure responsible data use. Key recommendations include:

     1. Regulation of Data Intermediaries

Establishing legally recognized data intermediaries (controllers and processors) to oversee AI-related data transactions and safeguard data subjects’ interests.

     2. Enactment of the Kenya Robotics and Artificial Intelligence Bill, 2023

Providing a dedicated legal framework to address AI governance, transparency, and accountability.

     3. Artificial Intelligence Code of Practice

Introducing enforceable standards requiring AI developers to disclose data sources, implement risk mitigation measures, and uphold privacy-by-design principles.

     4. Supply-Chain Approach to Data Privacy

Ensuring data protection throughout the AI lifecycle by embedding accountability mechanisms from data collection to model deployment.

Conclusion

AI’s intersection with data privacy presents complex legal challenges, necessitating proactive regulatory reforms. While Kenya’s Data Protection Act lays a foundation for safeguarding personal data, additional legislative measures are required to address AI-specific risks. A comprehensive legal framework, coupled with ethical AI practices, will ensure that innovation aligns with fundamental privacy rights, fostering a trustworthy and legally compliant AI ecosystem.

Adapted from: Jacob Ochieng