Treating AI Creative As A Long-Term, Data-Governed Process

As local brands race to embed AI into their marketing, compliance and reputational risk are quickly outpacing internal controls. While South Africa has no standalone AI law just yet, global regulations such as the EU’s AI Act will apply the moment South African campaigns reach overseas audiences.

‘Leaders can’t be complacent. While South Africa currently has no standalone AI law, we do have an AI policy process, including the 2024 National AI Policy Framework and, as of April 2026, a draft national AI policy approved for public comment. That draft explicitly flags bias and discrimination as key risks, along with disinformation and fake news generated through generative AI. Even without an enacted AI law, we have all the signals we need to know where we are heading, and our CMOs must get their houses in order,’ said Marisa Swanepoel, Global Creative Operations Director at Incubeta.

SA Likely To Take EU Lead

Europe operates under the EU AI Act, which came into force in August 2024, with most provisions fully applicable by August this year. It imposes strict rules on high-risk AI, such as real-time biometric profiling for targeted ads, and requires transparency for AI-generated content. EU regulators are serious about protecting consumers: the Act provides for fines of up to €35 million or 7% of global turnover. Because it applies to any content reaching EU audiences, it is directly relevant to global brands operating from South Africa as well.

However, Swanepoel pointed out that our existing regulators aren’t altogether toothless when it comes to AI misuse, saying South Africa already has legal pathways to pursue many AI-related abuses, even without a dedicated AI statute. This can be done through existing laws on identity rights, defamation, POPIA (for misuse of personal data in training deepfakes), and the Cybercrimes Act.

Risk Is Real But Local Brands Are Struggling

Swanepoel said working with both local and international clients to solve compliance challenges has highlighted how global organisations are treating AI bias as a hard compliance and brand‑risk issue rather than a soft ethics concern. This is in contrast to local companies, which she said often struggle with the fact that AI tools have moved faster than governance, both internally and externally.

‘We’ve seen what happens when that grey space is exploited. South African audiences have already been exposed to deepfake ads featuring fabricated endorsements from well-known figures, with documented cases of ads designed to mislead,’ said Swanepoel, referring to the recent case of local celebrity Katlego Maboe. ‘It’s no longer a hypothetical risk, and it has caused real harm to local personalities. It is our responsibility as marketers to ask the right questions and to evaluate how we approach creativity and AI ethically, not just in South Africa but globally,’ she said.

Treating AI Creative As A Long-Term, Data-Governed Process

Swanepoel said that to mitigate AI governance risk, businesses need clear accountability, refreshed rights and data controls, and the discipline not to cut corners just because AI is fast.

Every AI project should have a named owner responsible for permissions and legal checks. Contracts with models, influencers, photographers and agencies must explicitly cover AI and synthetic use, and brands should never use a person’s likeness or deepfakes without informed consent.

Sensitive data must be handled under strict sovereignty and security rules, with full visibility into where it is stored and which tools process it. Above all, companies should build bias, licensing and sign‑off checks into everyday workflows and keep policies updated as local and global regulations evolve.

‘By treating AI creative as a governed, data-informed process, we deliver work that relies on genuine insight rather than tactical AI guesswork. This requires a greater upfront investment as well as good problem statements and data points, but your outputs are defensible, ethical, and superior in performance. What’s more, with the right data foundation, a governed approach scales exponentially over time, both in speed and efficiency, through a compounding effect,’ she said.

Looking ahead, Swanepoel said marketers can’t get away from the fact that AI is already deeply embedded in how local brands create, target and optimise their marketing.

‘AI has moved on from being an abstract, ethical discussion. CMOs must be able to demonstrate that their use of the technology is lawful, careful and accountable. Putting in the work now to ensure you are covered by existing global regulations will reduce legal, operational and reputational risk as South Africa’s draft AI policy moves toward a more formal governance framework over the next few years,’ Swanepoel advised.

INCUBETA
www.incubeta.com