A Practical Framework for Creative Agencies Navigating Generative AI
Creative agencies have always been early adopters of new tools. From digital production to social platforms to automation, the industry has repeatedly absorbed emerging technology and translated it into cultural and commercial value. Generative AI is the latest chapter in that history, but it introduces a different kind of challenge. The issue is no longer just what can be created, but how responsibility, authorship, and trust are preserved when human creativity and machine generation overlap.
The Higgins-Berger Scale addresses that gap. It is not a moral verdict on AI, nor a set of prohibitions designed to slow innovation. It is a practical framework for evaluating how generative AI is actually used inside creative, informational, and commercial work, and for making those choices visible, defensible, and intentional.
Rather than treating ethics as a philosophical abstraction, the scale treats it as a design constraint: something to be considered early, discussed openly, and revisited as tools and expectations evolve.
Ethics Grounded in Practice, Not Theory
Much of the discourse around AI ethics swings between two extremes. On one end are abstract principles that are difficult to apply under real-world constraints. On the other are rigid rules that ignore creative nuance and context. Neither reflects how agencies operate day to day.
The Higgins-Berger Scale was designed for practical use within creative workflows. It evaluates outcomes and processes, not intentions or marketing language. The question it asks is straightforward: given how AI is being used in this specific project, what ethical risks are being introduced, and how are they being managed?
To answer that, the scale focuses on five areas where generative AI most often alters the ethical landscape: transparency, potential for harm, data usage and privacy, displacement impact, and intent. Each category is scored based on observed behavior and documented practice. Lower scores indicate stronger ethical alignment, while higher scores signal the need for mitigation, redesign, or in some cases, restraint.
Importantly, the scale does not demand perfection. It recognizes that ethical reasoning involves tradeoffs and judgment. Ambiguity is not a failure of the framework. It is an unavoidable feature of responsible decision making.
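The five-category structure described above can be sketched as a simple data model. The category names come directly from the scale; the 1–5 numeric range per category, the class name, and the validation logic are illustrative assumptions for this sketch, not published details of the framework.

```python
from dataclasses import dataclass, fields


@dataclass
class HigginsBergerAssessment:
    """Per-category risk scores; lower indicates stronger ethical alignment.

    The five categories come from the scale itself. The 1-5 range per
    category is an assumption made for illustration only.
    """
    transparency: int
    potential_for_harm: int
    data_usage_and_privacy: int
    displacement_impact: int
    intent: int

    def __post_init__(self):
        # Reject scores outside the assumed 1-5 range for any category.
        for f in fields(self):
            value = getattr(self, f.name)
            if not 1 <= value <= 5:
                raise ValueError(f"{f.name} must be scored 1-5, got {value}")

    def total(self) -> int:
        # Sum all five category scores into a single risk total.
        return sum(getattr(self, f.name) for f in fields(self))


# Example: a project with documented review and clear disclosure.
assessment = HigginsBergerAssessment(
    transparency=1,
    potential_for_harm=2,
    data_usage_and_privacy=1,
    displacement_impact=2,
    intent=1,
)
print(assessment.total())  # 7
```

Keeping each category as a named field, rather than an anonymous list of numbers, mirrors the scale's emphasis on making each choice visible and discussable rather than collapsing it into a single opaque score.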
Transparency as Accuracy, Not Performance
Transparency is frequently misunderstood as a requirement for constant disclosure. In creative practice, that approach is neither realistic nor necessary. The scale defines transparency more narrowly and more usefully.
The ethical obligation is not to list every tool involved in production, but to avoid misrepresentation. Claiming purely human authorship when generative AI played a meaningful role undermines trust, particularly in contexts where audiences reasonably expect craftsmanship, originality, or accountability.
As AI becomes a standard component of creative toolkits, many audiences already assume some level of machine assistance. Transparency becomes most critical when omission would mislead, such as in journalism, education, political messaging, or explicitly handcrafted work. In these cases, clarity is not performative. It is corrective.
Assessing Harm Through Context
Generative AI introduces new vectors for harm, but harm rarely arises from content alone. It emerges from context, distribution, and interpretation. The Higgins-Berger Scale evaluates whether AI output could reasonably mislead, reinforce bias, damage reputations, or create unintended negative consequences once released into the world.
The objective is not to eliminate risk entirely, which is rarely possible, but to anticipate and mitigate foreseeable issues. Lower scores reflect projects where risks have been examined, safeguards applied, and human review meaningfully integrated. Higher scores indicate unexamined assumptions or indifference to how content may be received or misused.
For creative agencies, this mirrors existing responsibilities around messaging, representation, and audience impact. The scale simply ensures those considerations are not bypassed because a machine was involved.
Data Responsibility Does Not Disappear
While individual creators and agencies may not control how large models are trained, they remain responsible for how data is selected, supplied, and used within their own workflows. Feeding sensitive information into opaque systems, relying on questionable datasets, or ignoring licensing and consent introduces ethical and legal risk regardless of intent.
The scale treats uncertainty around data provenance as a signal, not an excuse. When origins are unclear, caution is warranted. Ethical practice favors documented adherence to licensing standards, privacy laws, and data minimization, along with thoughtful selection of tools and vendors.
Convenience does not negate responsibility.
Augmentation Over Erasure
Technological change has always reshaped creative labor. Displacement alone is not inherently unethical. Ethical risk increases when automation replaces human contribution without consideration for impact, transition, or value creation.
The Higgins-Berger Scale distinguishes between AI used to augment human work and AI used to quietly substitute for it. Projects that integrate AI as a collaborative tool, support reskilling, or enable new creative roles consistently score lower than those that remove human judgment while preserving the appearance of human authorship.
For agencies, this distinction is critical. Long-term trust is built not only on what is produced, but on how creative responsibility is distributed and retained.
Intent as the Unifying Factor
Across all five categories, intent acts as the connective tissue. The scale differentiates between uses designed to create, inform, or improve access, and those driven primarily by deception, exploitation, or the desire to obscure accountability.
Commercial objectives are not inherently unethical. Risk escalates when efficiency, novelty, or engagement is prioritized at the expense of transparency, consent, or harm mitigation. Ethical failures most often arise not from malice, but from disengagement and the quiet removal of human responsibility.
Reading the Zones
Final scores place projects into five ethical zones, ranging from ethically exemplary to unethical or illegal. These zones are not judgments of creative quality or innovation. They are indicators of ethical risk and oversight.
Lower zones reflect responsible, well-governed use of generative AI. Higher zones signal areas where redesign, additional safeguards, or outright avoidance are necessary before work is released or scaled. A low score does not confer moral permission, and a high score does not imply malicious intent. The value of the scale lies in prompting earlier, clearer conversations before harm occurs.
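Read as code, the zone mapping might look like the sketch below. Only the two endpoint zone names, "ethically exemplary" and "unethical or illegal," appear in the text; the three middle labels and all score thresholds are placeholder assumptions, chosen here only to show the shape of a five-zone mapping.

```python
def ethical_zone(total_score: int) -> str:
    """Map a total score into one of five ethical zones.

    Assumes five categories scored 1-5 each, so totals run 5-25.
    Endpoint zone names come from the scale; the middle labels and
    the thresholds are illustrative placeholders, not published values.
    """
    if not 5 <= total_score <= 25:
        raise ValueError("total score must fall between 5 and 25")
    if total_score <= 8:
        return "ethically exemplary"
    if total_score <= 12:
        return "low risk"          # placeholder label
    if total_score <= 16:
        return "moderate risk"     # placeholder label
    if total_score <= 20:
        return "high risk"         # placeholder label
    return "unethical or illegal"


print(ethical_zone(7))   # ethically exemplary
print(ethical_zone(22))  # unethical or illegal
```

As the article notes, the mapping is a prompt for conversation, not a verdict: a "low risk" result does not confer moral permission, and a "high risk" result does not imply malicious intent.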
Ethics as a Creative Discipline
The most powerful mitigating factor in the Higgins-Berger Scale is meaningful human involvement. Human review, curation, and improvement are not formalities. They are the mechanism by which responsibility remains anchored to people rather than systems.
Ethical use of generative AI does not require abstinence or perfection. It requires intention, awareness, and accountability. Most failures occur when oversight is reduced to a checkbox or removed entirely in pursuit of speed or scale.
Used thoughtfully, the Higgins-Berger Scale provides creative agencies with a shared language for navigating generative AI without sacrificing trust, authorship, or judgment. It frames ethics not as a barrier to innovation, but as a discipline that strengthens it.
Generative AI will continue to evolve. So must the norms that guide its use. The scale is designed to be revisited, challenged, and refined over time. Its purpose is not to produce a score, but to ensure that human responsibility remains visible wherever machines are invited into the creative process.