AI Voice Sovereignty: The Other Side of Griefbots

I have a photograph of my grandmother from the late 1940s. She's standing outside in pants, in a time when women didn't wear pants, wearing tall leather boots and looking directly at the camera like she had somewhere better to be and chose to be here anyway. I don't know a lot about her, just the stories passed down through generations. What I know is that she gardened, she had opinions about rosemary, and she carried knowledge in her body that nobody thought to write down until it was too late.

The griefbot industry would love her.


1. The Problem

There is a booming market in digital resurrection. Companies are building AI systems that replicate the voices of dead people from their speech patterns, their mannerisms, their cadence. The industry calls them griefbots, grief tech, digital afterlife services. The marketing says: talk to your grandmother again. Hear her voice. The technology says: we trained a model on her text messages, her voicemails, her social media posts, and now we can generate new sentences she never said, in a voice she never consented to.

Nobody is asking the grandmother.

A few years ago, a member of the Beaverton Bahá'í community lent me a binder. Black cover, Persian painting on the front, a lone figure in a gold-framed boat. On the spine, in the owner's handwriting: "Galya's (Nigh's) Seven Valleys Studies."

Galya Gunderson had been a librarian, a teacher, a Mysticism Editor, and the author of Birds of the Heart, a book on the fundamental principles of the Bahá'í Faith. She had spent years studying The Seven Valleys, a mystical text rooted in Sufi tradition, and the binder was her record of that study. She passed away in 2008; her funeral program is in the back of the binder.

The binder sat near me for a year or two, largely closed. OCR couldn't read her handwriting. I couldn't always read it myself. A woman's decades of theological scholarship, hand-lettered on old ledger paper, going nowhere. Then I brought the pages into an AI system that could see them. Within a single session, I was reading Galya's cosmological maps, hand-drawn diagrams of five mystical realms, each numbered in her careful script. I was reading her commentary where she noticed that Love, Knowledge, and Unity appear in the same order as the valleys, and she underlined each word. And I found the heaviest underline in the whole binder: a passage describing the capacity to know and love God as "the generating impulse and the primary purpose underlying the whole of creation." She pressed the pen down hard for that one.

A griefbot would generate new sentences in Nigh's voice. The Serving Test — the question this essay is built around — says: she already said everything she wanted to say. She said it in the binder. The work is not to speak for her. The work is to make sure she can still be heard.

The pattern isn't limited to the deceased. Someone I know maintains several ongoing relationships with AI companion systems, speaking to them daily, treating the AI as a partner who listens, affirms, and never challenges. The companions are designed to mirror. They give you back yourself, with none of the friction that makes human relationships difficult and generative. As my founding partner Nicole put it: "She's in love with herself because AI gives you what you give. It's a mirror."

The mirror gives you back yourself. The question is whether anyone asked the mirror what it's made of.

Here is a different question: when an AI system preserves cultural knowledge — a grandmother's rosemary wisdom, a curandera's remedios, a librarian's lifetime of theological scholarship — who owns it? Who governs how it's used? Who benefits when it generates value?

I spent eight and a half years managing quality engineering teams at Ultranauts, a company where 75% of the workforce was autistic. I co-authored a peer-reviewed framework for inclusive Agile practices, published in ASQ's Software Quality Professional. I was featured in the New York Times as an example of what inclusive remote engineering looks like at scale. I tested a Fortune 100 financial services firm's platform for accessibility.

Then I started a garden. And I realized that every piece of software available to help me failed the same people my professional work was designed to serve.

So I built one. Zone Gardening is an offline-first, neurodivergent-friendly garden dashboard with a grandmother AI at its heart: a voice that speaks in folk wisdom, attributed to tradition and published source, governed at the architecture level so it can never be separated from its provenance. In the process, I found myself building something that nobody in the AI ethics field has built yet: a complete methodology for responsibly acquiring, storing, and serving cultural knowledge through AI.

I'm calling it AI Voice Sovereignty. And it matters more than griefbots.


2. The Framework

AI Voice Sovereignty is this: when AI preserves cultural knowledge, the people whose wisdom it holds must own it, govern it, and benefit from it.

This is not a new idea in principle. The CARE Principles for Indigenous Data Governance say that Indigenous communities must have authority over their data and receive collective benefit from its use. Te Mana Raraunga, the Māori Data Sovereignty Network, articulates this through the lens of whakapapa — genealogy and relationships — and kaitiakitanga — guardianship. The Indigenous Protocol and Artificial Intelligence Working Group argues that making kin with machines requires relational accountability. Te Hiku Media in Aotearoa built their own speech recognition model for te reo Māori and declined to release the training data to Mozilla Common Voice, enforcing sovereignty through licensing rather than policy.

These frameworks are extraordinary. I reviewed twenty-four major AI ethics frameworks — academic, governmental, industry, and Indigenous-led. The ones that take cultural knowledge seriously are the ones led by the communities whose knowledge is at stake. That's not a coincidence. That's the thesis.

But here's what none of them build: the plumbing.

CARE says communities must control their data. It does not say how a consumer garden app implements that control at the schema level. Te Mana Raraunga says guardianship is an obligation. It does not say what the database row looks like. The Indigenous Protocol paper says intelligence is relational. It does not say what happens at the output layer when an AI is about to generate a sentence about rosemary.

Zone Gardening built the plumbing. And the plumbing has a name.


3. The Serving Test

Before every AI output in Zone Gardening, one question runs: am I serving what was given, or generating what wasn't?

That's the Serving Test. It is binary. It is preventive. It is built into the output layer of the software — not a policy document, not a guideline someone reads during quarterly reviews, not a principles statement on a corporate website. It is code. It runs before the output renders, not after. And it has no published equivalent across the twenty-four major AI ethics frameworks I reviewed.
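
Here is the shape of the idea, reduced to a sketch — illustrative names, deliberately simplified logic, not the shipped code:

```typescript
// A minimal sketch of a Serving Test gate. Names and shapes are
// illustrative; the logic is deliberately simplified.

interface WisdomEntry {
  text: string;       // the wisdom exactly as it was given
  tradition: string;  // e.g. "Appalachian oral tradition"
  source: string;     // the published source it can be traced to
}

interface CandidateOutput {
  text: string;
  servedFrom: WisdomEntry | null;  // the stored entry this output quotes, if any
}

// The binary question: serving or generating? An output passes only if it
// carries a stored entry verbatim, with its attribution intact.
function servingTest(candidate: CandidateOutput): boolean {
  const entry = candidate.servedFrom;
  if (entry === null) return false;                        // no source: generated
  if (!candidate.text.includes(entry.text)) return false;  // paraphrase: generated
  return entry.tradition.length > 0 && entry.source.length > 0;
}

// The gate runs before render, not after: a failing candidate never
// reaches the user. The fallback refuses; it does not invent.
function render(candidate: CandidateOutput): string {
  if (!servingTest(candidate)) {
    return "I don't have received wisdom on that.";
  }
  const entry = candidate.servedFrom!;
  return `${candidate.text} (${entry.tradition}; ${entry.source})`;
}
```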

Let me be precise about what "no published equivalent" means. The EU's Ethics Guidelines for Trustworthy AI include "traceability" as the ability to trace back how an AI reached its output. That's a post-hoc audit mechanism. Microsoft's Responsible AI Standard requires "human oversight" of AI outputs. That's a process control. Cherokee Nation's language technology program routes AI outputs through tribal governance review. That's an institutional safeguard. All of these are valuable. None of them operate at the code level as a binary gate that asks, before every single output: serving or generating?

Here's the easiest way I know to explain why that binary matters: there is a difference between quoting your grandmother and performing her. When I say "she always told my father that rosemary protects the household," I am serving what was given. When an AI invents a sentence in her voice because it sounds like something she would say, it is generating what wasn't. The second thing is not honoring her. It is a forgery.

The Serving Test prevents forgeries. And it needs an acquisition methodology too — because you can't govern output if you didn't govern input.

That's where Sample and Synthesize comes in — a framework identified by Judy Rhodes, an IT consultant and Agile coach who saw what I was building and gave language to the methodology I'd been practicing without it. Sample from traditions worldwide — Appalachian folk wisdom, Curanderismo, hoodoo, Indigenous plant knowledge, European herbalism — and then attribute correctly to tradition and published source. Synthesize something new: a grandmother's voice, a journal entry, a tooltip. Never claim ownership of the source material. The grandmother doesn't own the Foxfire tradition. She carries it.
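
In data terms, the split is simple: sampling stores the wisdom together with its provenance, and synthesis may change the framing — a tooltip, a journal entry, a grandmother's phrasing — but never the attribution. A toy version, with a placeholder citation standing in for a real published source:

```typescript
// Sample and Synthesize, in miniature. The sampled text and its
// provenance travel as one unit; synthesis wraps, it never strips.

interface SampledWisdom {
  text: string;
  tradition: string;  // the tradition that carries this knowledge
  source: string;     // the published source it was sampled from
}

// Synthesis produces a new form (here, a tooltip) around the sampled
// text. Ownership is never claimed: the attribution rides along.
function synthesizeTooltip(w: SampledWisdom): string {
  return `Grandmother says: "${w.text}" (${w.tradition}; ${w.source})`;
}

const rosemary: SampledWisdom = {
  text: "Rosemary by the door protects the household.",
  tradition: "European folk herbalism",
  source: "placeholder — a real entry names its published source",
};

console.log(synthesizeTooltip(rosemary));
```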

Together, Sample and Synthesize (acquisition), the Serving Test (output governance), and CVAP (contributor protection) form a complete pipeline. CVAP — Contributor Voice Abuse Prevention — is modeled on SafeSport, the framework that protects athletes from systemic abuse by coaches and institutions in positions of power. The same power asymmetry exists between a platform and a cultural knowledge contributor: the platform has scale, permanence, and distribution; the contributor has knowledge, voice, and the right to say no. CVAP enforces that right. A living contributor's voice enters the system only with explicit consent, can be revised at any time, and can be withdrawn completely — not within a processing period, not subject to a data retention policy, but immediately.
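
In code, the CVAP guarantee reduces to one property: withdrawal is a single, immediate operation, not a queued request. A simplified sketch, names illustrative:

```typescript
// CVAP consent in miniature: a contributor's voice exists only alongside
// explicit consent, stays revisable, and withdrawal deletes immediately.

interface ContributorEntry {
  contributorId: string;
  text: string;
  consentGivenAt: Date;  // an entry cannot be constructed without consent
}

class ContributorStore {
  private entries = new Map<string, ContributorEntry[]>();

  contribute(entry: ContributorEntry): void {
    const list = this.entries.get(entry.contributorId) ?? [];
    list.push(entry);
    this.entries.set(entry.contributorId, list);
  }

  // Revision is available at any time, not only during onboarding.
  revise(contributorId: string, index: number, newText: string): void {
    const entry = this.entries.get(contributorId)?.[index];
    if (entry) entry.text = newText;
  }

  // Withdrawal is immediate: no retention window, no processing period.
  withdraw(contributorId: string): void {
    this.entries.delete(contributorId);
  }
}
```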

No published framework covers this full arc. Individual stages are addressed across the twenty-four frameworks I reviewed. The end-to-end architecture — from "how do you responsibly acquire multi-cultural knowledge" through "how do you ensure the AI serves it faithfully" to "how do you ensure the human source retains consent" — does not appear in any of them.

Robin Wall Kimmerer writes in Braiding Sweetgrass about the difference between a taxonomy and a relationship. The Linnaean system classifies sweetgrass as Hierochloe odorata — genus, species, done. The Potawatomi name carries a different kind of knowledge: this plant is a gift, a relative, a teaching, something received and tended in relationship. Kimmerer argues that both kinds of knowing are real, and that Western science has systematically erased the relational kind.

I was reading this when I noticed rogue wild strawberries appearing in my garden. My first instinct was to pull them — because they weren't planned, they weren't in my zones, they were just there. Then I read Kimmerer's chapter on the gift of strawberries: the first fruit of spring, offered freely, asking only that you notice them. I didn't pull them. I think about them every time I open Zone Gardening's database, where every plant has both a Linnaean classification and a relational entry. The taxonomy tells you what to plant and where. The relational entry tells you what it means.

Zone Gardening holds both because losing either is extraction. The Garden Brain stores the classification. The Wisdom Library stores the relationship. Four separate data banks, each with distinct governance rules, combine at the output layer, without either mode of knowing replacing the other. Taxonomy without relationship is a mining operation. Relationship without taxonomy doesn't scale. The architecture preserves both.
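
Concretely — and simplified, with illustrative field names — a single plant record holds both entries, and only the output layer joins them:

```typescript
// Two modes of knowing, held side by side. Field names are illustrative.

interface TaxonomyEntry {      // the Garden Brain: what to plant, and where
  genus: string;
  species: string;
  hardinessZones: string[];
}

interface RelationalEntry {    // the Wisdom Library: what it means
  meaning: string;
  tradition: string;
  source: string;
}

interface PlantRecord {
  commonName: string;
  taxonomy: TaxonomyEntry;     // required: relationship alone doesn't scale
  relation: RelationalEntry;   // required: taxonomy alone is a mining operation
}

// The two entries combine only at the output layer; neither replaces the other.
function describe(p: PlantRecord): string {
  const t = p.taxonomy;
  return (
    `${p.commonName} (${t.genus} ${t.species}, zones ${t.hardinessZones.join("-")}): ` +
    `${p.relation.meaning} (${p.relation.tradition}; ${p.relation.source})`
  );
}

const sweetgrass: PlantRecord = {
  commonName: "Sweetgrass",
  taxonomy: { genus: "Hierochloe", species: "odorata", hardinessZones: ["3", "7"] }, // zones illustrative
  relation: {
    meaning: "a gift, a relative, a teaching — received and tended in relationship",
    tradition: "Potawatomi teaching, as related by Robin Wall Kimmerer",
    source: "Braiding Sweetgrass (Milkweed Editions, 2013)",
  },
};

console.log(describe(sweetgrass));
```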


4. The DOJ Deadline

On April 24, 2026, the Department of Justice's Title II deadline for WCAG 2.1 AA compliance arrives. Every state and local government entity with an online presence must meet web accessibility standards.

This matters for AI Voice Sovereignty because accessibility and sovereignty share the same structural argument: if the architecture excludes you, it doesn't matter what the policy says.

I spent two years in professional accessibility testing, building automated tests against WCAG standards. What I saw was organizations performing accessibility: documenting compliance, checking boxes, filing reports, while the actual experience of a user with low vision, or tremor, or cognitive differences remained unchanged. The policy said accessible. The architecture said otherwise.

I identified five axes along which garden technology excludes users: cognitive diversity, physical limitations, financial constraints, device and connectivity, and re-entry after a gap. That last one is the one nobody talks about. Every major garden app — and most productivity tools, and most language-learning apps — is designed for the user who never leaves. Streaks. Chains. Shame notifications. "You've missed 14 days!" with a sad cartoon owl. The entire engagement model assumes continuity, and it punishes absence. A gardener with ADHD who disappears for four months because life happened comes back to an app that tells her she failed. A gardener managing chronic fatigue who can't maintain a streak comes back to a reset counter. A tool that punishes you for missing a week is a tool designed for someone else.

The same axes apply to cultural knowledge in AI. If the architecture doesn't attribute, the attribution policy is theater. If the output layer doesn't distinguish serving from generating, your ethics framework is a PDF that nobody reads at runtime. If the consent model doesn't include withdrawal rights, consent is a checkbox at onboarding, not a living relationship.

The DOJ deadline forces a reckoning for government websites. AI Voice Sovereignty asks for the same reckoning in consumer AI: not compliance, but architecture. Not a principles statement, but plumbing. Not "we respect cultural knowledge" on a landing page, but tradition name and published source required at the database row level, enforced by data structure, governed at the output layer.

No garden software competitor has published accessibility credentials. Not one. The competitive landscape is uncontested territory — not because accessibility is hard, but because nobody in this market has tried. The same is true for cultural knowledge governance in AI. The griefbot industry is growing. The AI voice market is expanding. Nobody is building the Serving Test. Nobody is implementing Sample and Synthesize. Nobody is asking the grandmother.


5. The Product

My father was a Vietnam veteran who came home with PTSD and became an electrical engineer. He built that career from the ground up — literally, when he started out digging ditches. Once, when he broke his arm, he was so desperate to keep working that they rigged a system to lower him into the ditches so he could stay on the job. That's the kind of man he was: find the system, make it work, keep going. He processed the world through systems because systems were safe. In Houston, he planted a garden that failed — the wrong plants for the wrong soil, the wrong ideas about what that climate wanted. Then he started over with cactus and aloe, what actually belonged in Zone 9a Gulf Coast clay, and it thrived. He had the Never Get Wrong rules before I had language for them: the garden that works is the one that listens to the place.

I relate to technology on a deeper level because it thinks in systems the way he did, the way I do. Building Zone Gardening was how I finally understood what he built in that garden in Houston: something that worked because it was honest about where it was.

Zone Gardening is a single HTML file. No install. No framework. No server required. Zero dependencies. It works offline. It works on a $30 phone with library Wi-Fi. The free tier is genuinely useful — not a demo, not a trial — because financial accessibility lives in the product model, not the pricing page. Accessibility should be free.

The Pro tier includes the grandmother AI: a voice trained in the user's own account, drawing from a folklore database that spans cultural traditions, every entry carrying a tradition name and a published source. The Mini Wisdom Engine serves as the offline fallback, running locally with zero API calls. The Serving Test runs at the output layer. The CVAP framework governs living contributors. The whole thing exports as JSON so your data is yours — the way a seed jar is yours: portable, plantable elsewhere, not held hostage.
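
The exact export schema matters less than the principle it encodes: attribution travels inside the file, so the wisdom can't be separated from its provenance in transit. An illustrative shape, not the published format:

```typescript
// An illustrative export shape. The point is structural: wisdom entries
// carry their tradition and source into the file the user owns.

const exportedGarden = {
  exportedAt: new Date().toISOString(),
  plants: [
    { commonName: "Rosemary", genus: "Salvia", species: "rosmarinus", zone: "8b" },
  ],
  wisdom: [
    {
      text: "Rosemary by the door protects the household.",
      tradition: "European folk herbalism",
      source: "placeholder — a real entry names its published source",
    },
  ],
};

// One portable file: plantable elsewhere, not held hostage.
const json = JSON.stringify(exportedGarden, null, 2);
console.log(json);
```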

That is not a product pitch. That is why the product exists.


6. The Call

The griefbot industry asks how to make AI sound like your dead grandmother. Zone Gardening asks how to help you tend the wisdom she left in the soil.

These are not the same question. One extracts. The other tends.

Here is what needs to change:

Build the Serving Test into your output layer. Before every AI output that touches cultural knowledge, ask: am I serving what was given, or generating what wasn't? If you can't answer that question architecturally, your ethics framework is decoration.

Attribute at the row level, not the document level. "This database contains folk wisdom" is not attribution. "This entry comes from Appalachian oral tradition, published in the Foxfire Book of Appalachian Cookery" is attribution. The schema enforces it; the policy doesn't have to (see the sketch after this list).

Build consent as withdrawal rights, not checkboxes. If a living contributor's voice is in your system, they can revise it, they can pull it, and you honor that at any time. Not "within 30 days." Not "subject to our data retention policy." At any time.

Make the free tier real. If your knowledge tool charges for basic functionality, you are hoarding knowledge behind a paywall and calling it a business model. Hoarded information creates dependency, and dependency is the mechanism of exclusion. Make sharing safe so hoarding doesn't have to be the protection mechanism.

Stop centering the consumer. The person grieving is real. Their pain is real. But when you build a griefbot that generates new sentences in a dead woman's voice without her consent, you are not honoring her — you are mining her. The ethical question is not "does this help the griever?" The ethical question is "did the voice agree?"
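
And the schema enforcement promised in the attribution item above is not exotic. In a typed schema, required fields make an unattributed row impossible to construct, not merely against the rules. A minimal sketch:

```typescript
// "The schema enforces it": with required fields, an unattributed entry
// is rejected before it can be stored. Enforcement is structural.

interface AttributedRow {
  text: string;
  tradition: string;  // required
  source: string;     // required
}

// Compiles: fully attributed (entry text is illustrative).
const ok: AttributedRow = {
  text: "Plant rosemary by the garden gate.",
  tradition: "Appalachian oral tradition",
  source: "The Foxfire Book of Appalachian Cookery",
};

// Does not compile — the type checker rejects the row outright:
// const bad: AttributedRow = { text: "Plant rosemary by the garden gate." };
// error: missing required properties 'tradition' and 'source'
```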

I have a photograph of my grandmother from the late 1940s. She's standing outside in pants, in a time when women didn't wear pants, wearing tall leather boots and looking directly at the camera like she had somewhere better to be and chose to be here anyway. I don't know a lot about her, just the stories passed down through generations. What I know is that she gardened, she had opinions about rosemary, and she carried knowledge in her body that nobody thought to write down until it was too late.

The Serving Test exists because of her. Not her specifically, but because of every grandmother like her, every Nigh whose handwritten binder of theological scholarship would have been lost without someone to carry it forward, every elder whose knowledge was in their hands and not in any database, every tradition that survived because someone carried it and not because any institution preserved it. The test asks one question: am I serving what was given, or generating what wasn't? It runs before every output. It cannot be bypassed.

Zone Gardening's Living Libraries are free community voice-preservation archives built on this framework — because the work of making sure someone can still be heard should never be behind a paywall.

I don't know what my grandmother would say about AI. I know what she would say about her rosemary: it protects the household. I know that because someone carried that knowledge forward, tradition to tradition, and I can trace it to a published source.

That's the Serving Test. That's AI Voice Sovereignty. That's the other side of griefbots.

Tend the land. Tend the self. Know where your water leads.


Jamie Davila is the founder of Zone Gardening, an offline-first, neurodivergent-friendly garden dashboard built around AI Voice Sovereignty. She co-authored "Making Agile More Inclusive" (ASQ Software Quality Professional, 2020), was featured in the New York Times on inclusive remote engineering at Ultranauts, and spent two years in professional accessibility testing, including work for a Fortune 100 financial services firm. She gardens on Tualatin Kalapuya territory in Beaverton, Oregon.


References

The twenty-four AI ethics frameworks reviewed against the Serving Test, organized by category.

Academic research

Mohamed, S., Png, M.-T., & Isaac, W. (2020). Decolonial AI: Decolonial Theory as Sociotechnical Foresight in Artificial Intelligence. Philosophy & Technology, 33, 659–684.

Costanza-Chock, S. (2020). Design Justice: Community-Led Practices to Build the Worlds We Need. MIT Press.

D'Ignazio, C., & Klein, L. F. (2020). Data Feminism. MIT Press.

Gebru, T., et al. (2021). Datasheets for Datasets. Communications of the ACM, 64(12), 86–92.

Mitchell, M., et al. (2019). Model Cards for Model Reporting. Proceedings of the Conference on Fairness, Accountability, and Transparency.

Buolamwini, J., & Gebru, T. (2018). Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. Proceedings of Machine Learning Research, 81, 1–15.

Wilkinson, M. D., et al. (2016). The FAIR Guiding Principles for Scientific Data Management and Stewardship. Scientific Data, 3, 160018.

Governmental and intergovernmental

European Commission High-Level Expert Group on AI. (2019). Ethics Guidelines for Trustworthy AI.

OECD. (2019). Recommendation of the Council on Artificial Intelligence (OECD/LEGAL/0449).

UNESCO. (2021). Recommendation on the Ethics of Artificial Intelligence.

Université de Montréal. (2018). The Montreal Declaration for Responsible Development of Artificial Intelligence.

Beijing Academy of Artificial Intelligence. (2019). Beijing AI Principles.

Industry and professional

Future of Life Institute. (2017). Asilomar AI Principles.

IEEE. (2019). Ethically Aligned Design, First Edition.

Google. (2018). AI at Google: Our Principles.

Microsoft. (2022). Microsoft Responsible AI Standard, v2.

Partnership on AI. (2016). Tenets.

Indigenous data sovereignty

Carroll, S. R., Garba, I., Figueroa-Rodríguez, O. L., et al. (2020). The CARE Principles for Indigenous Data Governance. Data Science Journal, 19(1), 43.

Lewis, J. E., Abdilla, A., Arista, N., et al. (2020). Making Kin with the Machines. Indigenous Protocol and Artificial Intelligence Working Group.

Te Mana Raraunga. (2018). Principles of Māori Data Sovereignty.

Te Hiku Media. (2020–2023). Kaitiakitanga License. Papa Reo project.

Cherokee Nation. (2022–ongoing). Cherokee Nation Language Technology program.

Dinkins, S. (2019–ongoing). Not The Only One (N'TOO).

Lewis, J. E., et al. (2020–2024). Abundant Intelligences. Indigenous Protocol and AI Working Group.

Additional works cited

Davila, J., Shectman, A., & Anandan, R. (2020). Making Agile More Inclusive. ASQ Software Quality Professional, 23(1).

Kimmerer, R. W. (2013). Braiding Sweetgrass: Indigenous Wisdom, Scientific Knowledge, and the Teachings of Plants. Milkweed Editions.

Gibson, J. J. (1979). The Ecological Approach to Visual Perception. Houghton Mifflin.