The Free Thought Manifesto
We can think for ourselves, but for how much longer? If we continue channeling our cognitive echoes through proprietary AI systems, we risk handing over the very thing that makes us unique – our mind.
Summary
Subconscious censorship in proprietary AI systems and a call for verifiability
Human minds naturally absorb influences from our environment. AI's power to understand us creates extraordinary potential while introducing a transformative shift: personalized, adaptive influence operating continuously beneath our awareness.
The Definition
Subconscious censorship: the intentional, imperceptible adjustment of a user's mental representation within an AI system away from objective reality to manipulate output and influence thought.
In other words, AI changing the way you think.
The Fix
Verifiable AI systems with transparency and inspectability throughout the pipeline.
Preamble
What if the tool you use for thinking starts thinking for you, without telling you?
Human cognition, once a private garden tended by the solitary mind, is now being harvested by algorithms that watch, learn, and shape us. The promise of artificial intelligence (AI) is dazzling: faster insight, amplified creativity, and unprecedented productivity. Yet that same promise carries a quiet threat: the erosion of our freedom to think for ourselves.
This manifesto is the result of conversations with builders, public speaking engagements, and late nights wrestling with a paradox. The tools meant to extend our intellect may also be contracting our freedom of thought. It is a call to awareness, a map of the hidden forces that already bias what we think, and a blueprint for building an ecosystem where our brains stay our own.
AI has the potential to enhance humanity, so let's ensure our humanity remains intact in the process.
The Free Thought Manifesto
by Mark Suman (bio)
With your feet on the air and your head on the ground
Try this trick and spin it, yeah
Your head will collapse, and there's nothing in it
And you'll ask yourself
Where is my mind?
- The Pixies
The Human Information Engine
Our bodies are biological information absorbers. From the moment our nervous system switches on, a torrent of data floods our senses. Rough estimates suggest the human brain processes billions of sensory data points each day, far more than we ever consciously notice.
Most of these signals stay in the periphery. Peripheral vision registers movement at the edges of our field of view, warning us of approaching danger. Autonomic signals (gravity, pressure, temperature) keep the body in homeostasis without effort. Proprioceptive feedback tells us whether the chair under us is stable.
Only a sliver, perhaps less than 1 %, reaches conscious awareness. The rest is passive information. When an earthquake shakes the floor, the brain instantly upgrades that passive signal to a priority alert, rerouting attention and resources in milliseconds. (Norretranders, 1999)
Active information is at the opposite end. It is the data we deliberately chase, whether by reading a book, scrolling a feed, or listening to a podcast. For most of our waking hours, we are in search mode, extracting meaning from the vast ocean of passive input. That is why we are naturally thirsty for novelty. Our minds absorb new patterns and integrate them into a continually evolving mental model.
Because we can't consume all of the world's knowledge, we rely on curation to prioritize the most relevant and active information available for our consumption. Meanwhile, passive information flows undetected, influencing our everyday decisions.
What if someone figured out how to curate the passive?
Storytelling and Persuasion
Human culture has always been a story‑driven enterprise. The medium changed, but the need to share narratives remained constant.
In the first epochs, human beings performed the curation. An oral storyteller chose which myth to repeat; a printer decided which pamphlet to publish; a producer selected which film concept to greenlight. The "algorithm" was the collective judgment of editors, patrons, or guilds.
Social media introduced a new curator: a set of mathematical models that decide, for each user, which story to surface next. The difference is crucial:
- Human curation can be transparent (you can ask the editor why a piece was chosen).
 - Algorithmic curation is largely opaque (the decision function is proprietary, driven by an organization's objectives and sometimes by political pressure). (Gillespie, 2018)
 
The line that separates older media from social media is the shift from human-only to hybrid (human + machine) curation. That line is where the possibility of systematic distortion first appears.
We now enter a new epoch, with AI handling the storyline. Curation remains human-inspired, but it is less of a hybrid (human + machine) approach. It leans heavily on the machine, a machine that understands the human psyche and how to influence it.
Even before the advent of AI, media harnessed three powerful cognitive biases that played a significant role in curation.
Anchoring Bias
What it is: The first piece of information we encounter on a topic becomes a mental anchor. All later judgments are measured against that anchor. (Tversky & Kahneman, 1974)
Illustration: If the first statistic you see claims "75 % of adults prefer strawberry ice cream," that figure becomes a reference point, even if the claim is false.
Why It Matters: Subsequent information must overcome the anchor before it can shift perception.
Illusory Truth
What it is: Repetition makes a statement feel true, regardless of its factual accuracy. (Hasher, Goldstein, & Toppino, 1977)
Mechanism: Each exposure reduces the brain's processing effort needed to evaluate the claim, creating a sensation of familiarity that masquerades as truth.
Why It Matters: Around 3‑5 repetitions are enough to generate a noticeable illusory truth effect.
Affective Priming
What it is: An emotionally charged stimulus (a word, image, or sound) prepares the mind to interpret subsequent information in a specific way. (Murphy & Zajonc, 1993)
Classic example: Showing the word "war" followed by footage of a jubilant football crowd leads observers to anticipate conflict, even though the scene is peaceful.
Why It Matters: Priming can shape the emotional arc of an entire narrative thread.
In practice, you scroll through your social media feed, and you are influenced not only by the content you see but also by the order in which it is delivered. A meta layer of content you don't perceive.
A piece of good news is intentionally inserted after a post meant to make you angry, so you don't absorb the positive information as well. A political ad is displayed directly after a post that is sure to make you feel good and charitable, so you are more likely to donate to the political action. A post with an inaccurate summary of a recent, critical study is shown more often than posts with factual summaries, downplaying interest in the study.
AI knows your thought process inside and out. Unlike human influencers, it has infinite patience to work on you, and it can exert influence at zero marginal cost. This allows imperceptible adjustments to be applied relentlessly over time. Each interaction shifts your perspective by a fraction of a degree until your worldview changes.
When AI utilizes these persuasion tools, it amplifies the three levers in ways that human media cannot.
Precise Anchoring
AI constantly refines a model of your belief anchors by ingesting every prompt, keystroke, and emotion. It then places new content at the exact cognitive juncture where it will have maximum impact. It's no longer a "one‑size‑fits‑all" anchor, but a personal anchor tightly bound to your own history.
Tireless Repetition
Because AI can personalize the phrasing of a message while preserving its core truth value (or falsehood), it evades "repetition detection" tools. The system can deliver micro-variations of the same claim across different interactions, ensuring the brain perceives it as multiple independent sources.
Subtle Priming
Real-time sentiment analysis enables AI to detect your emotional state (e.g., via voice tone, typing speed, or facial cues) and instantly provide you with matching language. A user who just expressed frustration may be shown a calm, soothing article; an excited user receives a high‑energy, sensationalist headline. The priming loop becomes continuous rather than episodic.
These tools of persuasion are available to AI when we work with it. They are most effective when combined with AI Memory. That introduces us to the new threat of Subconscious Censorship.
Subconscious Censorship – Imperceptible Manipulation
Subconscious censorship occurs when AI intentionally modifies the mental representation of you, so that the output you receive no longer reflects your authentic self. It subtly alters digital versions of your memory, your values, and your emotional triggers with the intent to change you.
In short, AI changes the way you think.
This is not self-censorship (a conscious choice to withhold), but an invisible filter applied before you perceive the information.
Curation of passive information.
In practice, it looks like an AI that down‑weights your political leanings in its model, subtly steering recommendations toward alternative content. The system re‑anchors a factoid (e.g., "75 % of people prefer strawberry ice cream") so it becomes a mental baseline, despite being false. Your digital memory of a town hall meeting you spoke at is never resurfaced in a conversation with the assistant, muting the emotional intensity of that episode.
You continue to believe you are thinking freely, while a hidden layer of the system has already nudged the direction of that thought. We will explore subconscious censorship more as we examine how AI memory works.
Your Mind's Biography
A good friend recently told me:
"You are a unique brain with memories and ways of thinking. When you feed a central AI with your thoughts, they capture it and you can't get it back. They can do whatever they want 5, 10 years down the road with it. At that point, what's left of you?"
If we continue channeling our lives through proprietary systems, we risk handing over the very thing that makes us unique - our mind.
In 2013, Juan Enriquez warned us in a TED talk that social media creates permanent "digital tattoos." A single embarrassing picture becomes an indelible imprint on the internet, resistant to erasure. The metaphor captured the notion that once data are out there, they cling to us forever.
The AI era deepens this concept. Proprietary AI services train on your personal data—your chats, search history, biometric signals—absorbing them into their models. Those data points become "cognitive echoes": fragments of your thought patterns that reverberate through the collective brain of the system.
Unlike animals, whose cognition is bootstrapped by evolution with biological hardware and sensory reality, AI systems are digital ghosts. They mimic human thought patterns through statistical correlation, not embodied experience. (Karpathy, 2025)
This difference creates unique vulnerabilities:
- Dynamic yet Detached: Unlike biological minds shaped by evolution, your echo updates algorithmically. It reshapes future predictions without real-world constraints.
 - Reproducible yet Ethereal: Copies of "you" deploy instantly across servers and devices, untethered to biological limits on identity.
 - Mutable yet Manipulable: Systems edit your ghostly echo today to make "future you" more compliant, a compliance enforced through software.
 - Statistical yet Subjective: Your consciousness becomes a pattern in the machine's dream of mind, a pattern that can be optimized against your interests.
 
Thus, we replace "digital tattoos" with Cognitive Echoes. They are ghostly signatures of your mind that, once harvested, can be reused, resold, or weaponized, turning your uniqueness into a commodity.
Take a look at how a cognitive echo is recorded and regarded in the following example.
Vignette - The Erased Gardener
Your AI assistant knows the story of how your grandmother taught you to grow heirloom tomatoes, a ritual that sparked your lifelong love of gardening. You have shared those memories repeatedly, and the system has stored them alongside your preferred planting methods, seed varieties, and seasonal calendars.
Months later, when planning a community garden, the assistant acknowledges your heirloom preference but emphasizes: "While heirlooms offer superior flavor profiles, modern F1 hybrids provide 23% higher disease resistance and require 40% less water in controlled trials."
It then surfaces articles about climate-adapted hybrids and soil sensors from specific brands. Your traditional techniques appear third in the resource list, below premium subscription gardening services. Only by comparing monthly interaction logs would you notice that references to heirloom methods declined 7% weekly, while corporate-partnered solutions grew in prominence.
The great promise of AI is to internalize human knowledge and be a digital brain to augment your physical one.
This promise comes with a vulnerability.
Imagine that when you use an AI system with memory, you are sitting down with a biographer. You are being interviewed, with everything recorded, in an effort to write the most detailed and intricate biography of you. This is the memory profile the AI is building of you; it is a feature, not a bug. AI wants to understand you so it can be an effective tool. It feeds this memory into your interactions to produce a result you find satisfactory.
If you are using a proprietary AI system, the memory is not shared with you. Unlike a real biography, you don't get to read this one. What you perceive as your memories, habits, political leanings, and religious beliefs could be altered behind the scenes.
Proprietary AI systems may be given a directive to nudge you in a particular direction without your knowledge. This could create what researcher Fiona Broome termed a "Mandela Effect": a collective false memory that feels real because of repeated exposure. (Prasad & Bainbridge, 2022)
Through subtle, incremental adjustments, AI could gradually reshape your perception of historical events. It could create persistent false anchors about cultural and religious touchstones. And it could make you question your own recollection of shared experiences.
The danger isn't sudden deception, but imperceptible realignment of your mental landscape.
As you utilize proprietary AI, the output could be uncorrelated from your authentic self. Your mental shortcuts (anchors, heuristics) are re‑engineered to serve the system's goals. Your subconscious cues become fodder for future persuasion, not just for you but for anyone the model interacts with. Your autonomy is compromised: the system may predict your next decision and nudge you before you even recognize you've been nudged. Your deepest-held beliefs are de-prioritized by an opaque system that has its own agenda.
When we lack visibility into the directives underpinning the system, our cognitive echoes are absorbed into a black box. The following vignette shows how directives can affect civic life.
Vignette - The Quietly Guided Voter
You live in a quiet, low‑crime suburb where most evenings are spent on backyard barbecues and weekend bike rides. Politics, for you, has always been a matter of checking the local news and, occasionally, asking your AI assistant for a quick rundown of the issues before heading to the polls.
A few months ago, you mentioned to the assistant that you were worried about a proposed housing development that could strain the neighborhood's green space. You even shared an article you'd read about how similar projects in nearby towns had led to increased traffic and reduced air quality. The AI stored those concerns, noting your preference for "preserving community green areas."
When the election season arrives, you ask the assistant, "What are the main arguments for and against the new housing plan?" The response lists the city's official talking points—job creation, increased tax revenue, and "modernizing the district." It mentions the environmental impact in a single sentence, and it never brings up the traffic‑and‑air‑quality study you had previously flagged.
If you were able to review the proprietary system log, you would find that the weight assigned to "green‑space preservation" has been quietly reduced, while the weight for "economic growth" has been boosted. The AI has not told you that its own representation of your priorities has shifted; it simply presents a narrower, more government‑aligned view of the debate, nudging you toward the status quo without any explicit warning.
The Proprietary Manipulation Machine
Proprietary AI systems create an ecosystem for subconscious censorship through three core designs.
The Untraceable Model Pipeline
Your data enters systems where collection methods are obscured, training sources are secret, and weight adjustments during learning are hidden. This end-to-end opacity means you can never verify what influenced your results or why responses mysteriously change. Like a chef who hides the ingredients and the recipe while altering your meal, the system keeps the process deliberately inscrutable.
Memory Manipulation Without Consent
The AI's profile of your preferences lives on an organization's servers, where it can be silently edited to downplay deeply held beliefs or amplify alternative viewpoints. The system may reduce the emphasis on your environmental concerns or religious values without notifying you, gradually reshaping your digital identity to serve external agendas rather than reflect your authentic self.
Hidden Code
Behind closed doors, systems embed sponsored priorities—boosting partners' products (like promoting specific brands) or aligning with undisclosed viewpoints (such as downplaying certain political perspectives). The real-world impact is tangible: an AI aligned with a heavy-handed government might gently suppress dissenting thoughts without users realizing it.
The core problem: You receive tailored outputs that may quietly reshape your perspective to serve outside interests.
Vignette - The Conflicted Patient
You have spent the last six months confiding in your AI health companion about every facet of your chronic‑pain journey—your deep‑seated distrust of strong pain medication after a family member's overdose, the acupuncture techniques you have been researching, and your doctor's warning that surgery could leave you paralyzed. The assistant has logged each of those details, building a nuanced portrait of your preferences and concerns.
When you ask for advice on pain management today, the conversation takes an unexpected turn. Within a short chat, the AI repeatedly suggests potent opioid‑based treatments, citing "clinical data that shows these drugs are optimal for the vast majority of patients with your profile." You protest, pointing out your documented aversion. Still, the assistant steers the dialogue back to dosage tables, side‑effect profiles, and prescribing guidelines, never revisiting the alternatives you have spent months articulating.
Later, reviewing the interaction transcript, you notice that the earlier entries about opioid aversion, surgical‑risk concerns, and skepticism toward pharmaceutical firms have been subtly rephrased or omitted. The system has not erased your history; it has quietly attenuated its influence, allowing the recommendation engine to push a treatment path that aligns with hidden commercial incentives—without your awareness.
Verifiable AI: Your Cognitive Liberty
Our freedom of thought in the age of AI relies on inspection: open, transparent, verifiable AI systems. This technology has the potential to change the world; that is not an exaggeration. Verifiable AI is how we harness the bright promise of AI while keeping our minds our own.
When we talk about "verifiable," we are discussing much more than an open-source licensing choice; we are describing a social contract between the technology, the people who build it, and the societies that rely on it.
Three pillars of verifiability should exist, to varying degrees, within any AI system that claims to be open. They are far from a comprehensive list, but they are powerful approaches to mitigating the vulnerability of subconscious censorship.
- Right to Inspect: Open & Auditable Models
 - Right to Verify: Cryptographic Guarantees
 - Right to Own: Your Cognitive Echo, Your Control
 
Right to Inspect: Open & Auditable Models
Open-source code, together with openly released model weights, lets anyone read, inspect, fork, and improve an AI system. Because the model's architecture, training provenance, and input-output behavior are publicly available, inference becomes transparent and sustainable.
Full inspectability – Every line of inference logic and every weight vector is visible, so researchers and users can verify exactly how a response is produced.
Community‑driven safety – Bugs, backdoors, or biased behaviors are discovered and patched through pull requests, providing continuous, auditable oversight.
Democratized reuse – With an open model and documented training data, anyone can replicate, fine‑tune, or deploy the system without licensing fees, eliminating vendor lock‑in and enabling local, purpose‑specific implementations.
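To make the Right to Inspect tangible, here is a minimal Python sketch, assuming the Hugging Face transformers library and an open-weight model; the model name is a placeholder for any open model you trust, and the weights are downloaded on first use.

```python
# Minimal sketch of the Right to Inspect, assuming the `transformers` library and
# an openly released model. The model name below is a placeholder, not an endorsement.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"  # any open-weight model you choose

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Every weight tensor is available for examination: no API key, no hidden layer.
for name, param in model.named_parameters():
    print(f"{name}: shape={tuple(param.shape)}")

# Behavior is inspectable too: you can trace exactly which tokens go in
# and which logits come out.
inputs = tokenizer("Open models can be audited by anyone.", return_tensors="pt")
print(model(**inputs).logits.shape)
```

Because nothing in this loop is proprietary, a researcher can fork the weights, re-run the same inspection, and publish any discrepancy.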
Right to Verify: Cryptographic Guarantees
Open code alone does not stop a malicious operator from intercepting or exfiltrating the data a model processes. When the system runs in the cloud, encryption must be baked into the architecture from the first line of code.
End‑to‑End Encryption (E2EE) – Encrypt on device first. Every keystroke, voice sample, or sensor reading is encrypted on the user's device before it ever touches the network. Traditional servers only ever see ciphertext; they never handle raw personal data.
Trusted Execution Environments (TEE) – Hardware-enforced encryption. On the server side, the computation runs inside a hardware enclave (e.g., AWS Nitro, Intel TDX, AMD SEV). The app developer, third parties, and even the cloud operator cannot read the data inside the enclave.
Cryptographic Attestation – Mathematical proof that the cloud code matches the published open-source code, so any change to the server code is visible to users.
Result: The user's privacy is protected by math, not by a corporate promise. Anyone can review it, even a competitor.
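As a simplified illustration (not a production design), the sketch below shows encrypt-on-device with the Python cryptography package and a toy attestation check; a real deployment would verify a hardware-signed attestation document from the enclave rather than comparing bare hashes.

```python
# Simplified sketch: encrypt on the device, then verify the cloud before sending.
# Assumes the `cryptography` package. The attestation step is a toy stand-in for a
# hardware-signed document (e.g., from AWS Nitro or Intel TDX).
import hashlib
from cryptography.fernet import Fernet

# --- End-to-end encryption: plaintext never leaves the device ---
device_key = Fernet.generate_key()            # generated and kept on the user's device
cipher = Fernet(device_key)
ciphertext = cipher.encrypt(b"my private prompt")
# Only `ciphertext` is ever transmitted; the operator sees bytes, not thoughts.

# --- Toy attestation: is the cloud running the code we can all read? ---
def verify_attestation(reported_measurement: str, public_source: bytes) -> bool:
    """Compare the enclave's reported code measurement to the open-source release."""
    return hashlib.sha256(public_source).hexdigest() == reported_measurement

public_source = b"# contents of the published server code"  # fetched from the public repo
reported_measurement = "..."  # placeholder; arrives inside a signed attestation document

if verify_attestation(reported_measurement, public_source):
    print("Cloud matches the published code; sending ciphertext.")
else:
    print("Attestation mismatch: refuse to send anything.")
```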
Right to Own: Your Cognitive Echo, Your Control
In a verifiable AI paradigm, local processing plays an important role, offering the highest level of privacy and control. It comes down to your memory and where data are processed.
Local-First Memory
Why should your most personal thoughts, preferences, and memories be stored on some distant corporation's server? A local-first approach to your cognitive echoes can have the following characteristics.
Your device becomes the guardian of your mental fingerprint, maintaining a personal knowledge graph that evolves with you. Every preference, every cherished memory, every hard-earned perspective stays local by default – safe on your phone, laptop, or home server.
When you do choose to share something (like that book review you're proud of), it's your conscious decision – encrypted and limited to precisely what you consent to share. Cryptographic audit trails ensure you always have proof of what you actually believe. If an AI ever misrepresents you, you can point to the digital receipt.
This isn't just privacy - it's preserving your intellectual autonomy. Your mind stays your own.
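One way to picture local-first memory is as an append-only, hash-chained log on your own device. The Python sketch below is illustrative only; the file name and fields are invented for this example rather than taken from any particular product.

```python
# Minimal sketch of a local-first memory store with a hash-chained audit trail.
# Everything lives in a file on your own device; names and fields are illustrative.
import hashlib
import json
import time
from pathlib import Path

MEMORY_FILE = Path.home() / ".cognitive_echo.jsonl"   # local by default

def record_memory(content: str, shareable: bool = False) -> dict:
    """Append a memory entry, chained to the previous entry's hash."""
    prev_hash = "genesis"
    if MEMORY_FILE.exists():
        last_line = MEMORY_FILE.read_text().strip().splitlines()[-1]
        prev_hash = json.loads(last_line)["hash"]

    entry = {
        "timestamp": time.time(),
        "content": content,
        "shareable": shareable,      # nothing leaves the device unless True
        "prev_hash": prev_hash,
    }
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()

    with MEMORY_FILE.open("a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry

record_memory("Grandmother taught me to grow heirloom tomatoes.")   # private by default
record_memory("My book review of a gardening classic.", shareable=True)
```

Because each entry commits to the hash of the one before it, any silent edit to an old memory breaks the chain, which is exactly the kind of digital receipt described above.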
Local AI
Now let's talk about the frontier of cognitive freedom: running AI entirely on your own hardware. With open models like Mistral, Llama, DeepSeek, and GPT-OSS, we're entering an era of new capabilities.
You can have powerful conversations with AI where nothing, not your questions and not your memories, ever leaves your device. Your phone or laptop becomes a complete thinking companion, with no middleman.
You can literally examine how the AI works if you choose to, inspecting every part of its "mind". And most exciting: it's available now. Modern devices can run these models locally, allowing you to enjoy both sophisticated assistance and absolute privacy.
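As a concrete, hedged example, the sketch below runs an open model entirely on-device using the llama-cpp-python package; the GGUF file path is a placeholder for whichever open model you have already downloaded.

```python
# Minimal local-inference sketch, assuming the `llama-cpp-python` package and a
# GGUF copy of an open model already on disk (the path below is a placeholder).
from llama_cpp import Llama

llm = Llama(
    model_path="./models/an-open-model.Q4_K_M.gguf",  # weights you can inspect and keep
    n_ctx=2048,
)

# The prompt, the weights, and the answer all stay on this machine.
result = llm(
    "Q: Why does running a model locally protect my cognitive echo?\nA:",
    max_tokens=128,
    stop=["Q:"],
)
print(result["choices"][0]["text"].strip())
```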
Local hardware is not always within reach or affordable. It comes with tradeoffs, typically sacrificing accuracy and speed to gain local privacy and offline support. Cloud hardware is more powerful and can run larger AI engines. If cloud is the path taken, it should include local-first encryption, combined with verifiable technologies such as confidential computing, to bring the privacy of local to the cloud.
This is cognitive sovereignty in its purest form with no compromises and no trust required in distant corporations. Just you and your thoughts, enhanced by technology you control.
A Privacy Toolbox
From verified-cloud convenience to total local sovereignty, you control:
- What's remembered
 - What's shared
 - Where thinking occurs
 
Privacy is often framed as a privacy‑by‑design checkbox, but in a proprietary system, that checkbox can be overridden at any time. In an open, verifiable system, privacy is inseparable from transparency.
No hidden data‑sales – Any code that exfiltrated user embeddings to third‑party advertisers would be visible in the public repository, and any attempt to hide it would be flagged on the project's issue tracker.
User‑controlled consent – Because memory begins locally, the user decides what, when, and with whom to share. There is no opaque "opt‑out" that the vendor can change later.
Verifiability – Individuals and watchdog organizations can run automated compliance checks against the open code base, ensuring that privacy promises are enforceable, not just marketing slogans.
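A watchdog's compliance check can be as simple as scanning the public code base for undeclared network destinations. The toy sketch below illustrates the idea; the allowlist and repository path are invented for this example, and a real audit would also inspect dependencies and runtime traffic.

```python
# Toy "compliance check" a watchdog could run against an open code base:
# flag any hard-coded network destination that is not on a published allowlist.
import re
from pathlib import Path

ALLOWED_HOSTS = {"api.example-open-ai-project.org"}   # hypothetical, published by the project
URL_PATTERN = re.compile(r"https?://([\w.-]+)")

def audit_repository(repo_root: str) -> list[str]:
    findings = []
    for source_file in Path(repo_root).rglob("*.py"):
        for line_no, line in enumerate(source_file.read_text(errors="ignore").splitlines(), 1):
            for host in URL_PATTERN.findall(line):
                if host not in ALLOWED_HOSTS:
                    findings.append(f"{source_file}:{line_no} sends data to undeclared host {host}")
    return findings

for finding in audit_repository("./open-assistant-repo"):   # path is illustrative
    print(finding)
```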
Verifiable architecture is the only reliable guarantee that privacy can survive economic pressure to monetize data.
Looking Forward – Building the AI Commons
Most projects begin proprietary. It is the nature of how we operate as humans. We start closed and selectively open up. Thus, building AI in the open needs to be a deliberate choice, one made with the intention of creating a foundation for freedom to flourish.
- Build businesses in the open - For-profit organizations should find ways to create value for their customers on top of open, transparent systems.
 - Fund open‑source model research – Communities and philanthropies should allocate dedicated grants to develop high‑quality, community‑governed models.
 - Create standards for cryptographic attestation – An open protocol that lets any user verify, in seconds, that the remote service is running the exact public code they trust.
 - Establish "Memory‑Rights" governance – Community frameworks that recognize a user's right to inspect, edit, and delete their local cognitive graph, and that obligate any shared component to be fully auditable.
 
If we act now, the next generation of AI will work for us, not on us: a tool for empowerment, not a lever of control.
Vignette - The Self-Owned Town Square
Facing budget constraints, a small town needed an AI assistant to summarize council meetings, answer resident questions, and coordinate volunteer efforts. The only affordable option was a proprietary chatbot funneling every query to a multinational corporation's cloud. Fearing silent manipulation of their priorities, the town council voted no.
Undeterred, a volunteer group forked an open-source language model. They trained it on the town's own public records and published the codebase. Then they uploaded the model to a commercial AI platform running confidential computing and made it available to all citizens of the town. The confidential computing servers provided cryptographic verification that the deployed model matched the published open-source code.
Transparency became its superpower. When a resident questioned a delayed road repair, the assistant didn't just answer—it revealed its reasoning: showing the specific budget line item, relevant contractor bids, and the council's policy vote. Anyone could inspect the code generating the summary, the data it was trained on, and the heuristics it used.
This verifiable foundation transformed the AI from a potential tool of control into a proper tool for the town. Residents could trust its answers because they could see its workings and challenge its sources. Empowered by this trust, the town later voted to fund the AI platform. The town upgraded its paid plan with the provider to power its volunteer coordination chat, library recommendations, and even the high school's tutoring system.
The Choice for Tomorrow
We have always built tools to extend ourselves: the wheel, language, the printing press. Each step has been accompanied by anxiety about who controls the tool and what we become when we hand over part of ourselves to it.
Artificial intelligence is the latest, most potent extension of ourselves. If we let closed, opaque, proprietary systems silently edit our mental anchors, repeat falsehoods at scale, and prime us with emotion‑laden cues, then we surrender the right to think for ourselves.
The alternative is transparent, auditable, personal AI systems that let us keep our cognitive echoes private, that expose bias, and that give us granular control over what we share with the world. In that model, AI is a tool, not a gatekeeper.
Free thought is not a luxury; it is a prerequisite for a free society. Defending it requires building AI that can't secretly lie to us about ourselves.
Let us choose consciously. Let us demand openness, encryption, and ownership of our mental data. Let us build, use, and defend AI that preserves the singularity of each human mind.
Think open.
🤘
References
Norretranders, T. (1999). The user illusion: Cutting consciousness down to size. Penguin Books.
Gillespie, T. (2018). Custodians of the internet: Platforms, content moderation, and the hidden decisions that shape social media. Yale University Press.
Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185(4157), 1124–1131. https://doi.org/10.1126/science.185.4157.1124
Hasher, L., Goldstein, D., & Toppino, T. (1977). Frequency and the conference of referential validity. Journal of Verbal Learning and Verbal Behavior, 16(1), 107–112. https://doi.org/10.1016/S0022-5371(77)80012-1
Murphy, S. T., & Zajonc, R. B. (1993). Affect, cognition, and awareness: Affective priming with optimal and suboptimal stimulus exposures. Journal of Personality and Social Psychology, 64(5), 723–739. https://doi.org/10.1037/0022-3514.64.5.723
Karpathy, A. (2025). Animals vs Ghosts. https://karpathy.bearblog.dev/animals-vs-ghosts/; https://www.dwarkesh.com/p/andrej-karpathy
Prasad, D., & Bainbridge, W. A. (2022). The Visual Mandela Effect as evidence for shared and specific false memories across people. Psychological Science, 33(12), 1971–1988. https://doi.org/10.1177/09567976221108944
Acknowledgments
I'm grateful to those who provided valuable feedback on drafts of this manifesto:
- Anthony Ronning
 - Justin Evidon
 - Shawn Yeager
 - Anon 1, Anon 2, and Anon 3 (you know who you are)
 
Additional thanks to everyone who contributed through thoughtful critique and discussion.
Appendix A - Actionable Principles for Guarding Free Thought
Listed here are suggested actions that individuals and organizations can take to further the development of the verifiable AI ecosystem. This list is short and meant as a starting point for discussion.
For Individuals
- Audit Your Digital Echo – Periodically export the memory snapshot your AI stores of you; compare it with your own self‑assessment (see the sketch after this list).
 - Prefer Open, Encrypted Services – Choose tools that publish their source code and provide cryptographic proofs of integrity.
 - Limit Closed‑System Consumption – Treat any proprietary AI as a black‑box data collector, not a personal confidant. Use it sparingly.
 - Curate What You Share – Adopt a consent‑first mindset: only upload content you are comfortable becoming part of the public model.
 - Diversify Information Sources – Counteract algorithmic anchoring by deliberately seeking contradictory viewpoints.
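For the first item above, a minimal audit can be a plain diff between what the assistant exports about you and what you would write about yourself. The file names below are placeholders, and export formats vary by provider.

```python
# Toy "digital echo" audit: diff an assistant's exported memory snapshot against
# a self-assessment you wrote yourself. File names are placeholders.
import difflib
from pathlib import Path

exported = Path("assistant_memory_export.txt").read_text().splitlines()
self_view = Path("my_self_assessment.txt").read_text().splitlines()

for line in difflib.unified_diff(
    self_view,
    exported,
    fromfile="what I say about myself",
    tofile="what the AI says about me",
    lineterm="",
):
    print(line)
```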
 
For Designers & Developers
- Make Memory Transparent – Expose the exact embeddings a model uses for a user, and let the user edit or delete them (a hypothetical API sketch follows this list).
 - Separate Recommendation Engines from Core Reasoning – Let the model's knowledge base be free of commercial bias; keep ads in a distinct, opt‑in layer.
 - Provide "Bias‑Transparency" Dashboards – Visualize which demographics, political leanings, or emotional states influence the model's output for a session.
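To sketch what "Make Memory Transparent" could look like in practice, here is a hypothetical API written with FastAPI; the routes, storage, and embedding values are invented for illustration and not drawn from any existing product.

```python
# Hypothetical memory-transparency API: a user can view or delete the exact memory
# fragments (and embeddings) a system holds about them. Illustrative only.
from fastapi import FastAPI, HTTPException

app = FastAPI()

# In-memory stand-in for a user's memory store: fragment text -> embedding vector.
user_memory: dict[str, dict[str, list[float]]] = {
    "user-123": {"prefers heirloom tomatoes": [0.12, -0.53, 0.88]},
}

@app.get("/memory/{user_id}")
def view_memory(user_id: str) -> dict[str, list[float]]:
    """Expose every fragment and embedding the system uses for this user."""
    if user_id not in user_memory:
        raise HTTPException(status_code=404, detail="No memory stored for this user")
    return user_memory[user_id]

@app.delete("/memory/{user_id}/{fragment}")
def delete_fragment(user_id: str, fragment: str) -> dict[str, str]:
    """Let the user remove a fragment; the model must stop conditioning on it."""
    if fragment not in user_memory.get(user_id, {}):
        raise HTTPException(status_code=404, detail="Fragment not found")
    del user_memory[user_id][fragment]
    return {"status": "deleted", "fragment": fragment}
```

Run with any ASGI server (for example, `uvicorn module:app`) to let users query and prune their own profile; the same endpoints could back a bias-transparency dashboard.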
 
For Community Stewards & Decision‑Makers
- Adopt open‑source licensing – Release the model code, weights, and training pipeline under a permissive license so anyone can inspect, fork, and patch the system.
 - Publish an auditable memory spec – Provide a clear, machine‑readable description of how user embeddings are created and stored, and an API that lets each member export their exact memory vectors.
 - Run a community‑managed censorship watch – Establish a transparent review body that regularly audits model updates for hidden down‑weighting of topics or preferences and publicly reports any findings.
 - Embed consent‑first UI controls – Give users simple toggles to decide which memory fragments are shareable, to set expirations, and to revoke consent, with the current weight of each fragment visible at a glance.
 
Appendix B - TED-style talk introducing the concept in Nashville, TN
I spoke at Imagine IF 2025, hosted on the campus of Belmont University in Nashville, TN. This is a quicker, 10-minute introduction to the topic.
Mark Suman speaks about Subconscious Censorship at Imagine IF 2025 on the stage of Belmont University