AI Manifesto
AI is a powerful tool that could revolutionise our society in many different ways and reshape how individuals interact with one another. Like all innovations, the tools themselves are neither inherently good nor inherently bad. Whether they have a positive or negative impact on society depends on the people who use them: what problems we solve with these tools, what understanding we have of the consequences (both good and bad) they could bring, and what guardrails we put in place to reduce or even prevent unintended harmful consequences.
Hence, I created this AI manifesto to help me think through when, what, and how I use AI, the principles that guide my decisions, and some specific dos and don'ts. The development and advancement of AI will not stop, and I will regularly update this manifesto as I learn more about AI, gain more experience using it across different areas, and see the technology evolve.
Principles
Personal relationships are key
As a Christian, I believe God created humans to be relational, and we are called to love and care for one another. While human interactions are not always perfect and require effort and humility to keep going, we should never replace human interaction with bot interactions (which are tailored and designed to make us feel good).
Hard work and friction can be good
We should not mindlessly chase efficiency by removing all the hard work and friction from the things we work on. We learn and grow through hard work and friction. They help us think carefully about whether something is worth pursuing. They also serve as limiters that push us to prioritise better and invest our energy and effort in the things that truly matter to us.
Quality over quantity
What matters most is what we create and why we create it. The real value we can bring to the world comes from creating things that deliver value responsibly. While I am all for pragmatism and agility, there is a subtle difference between “ship fast and iterate” and “throw everything at the wall and see what sticks”. AI enables us to create (or generate?) many things quickly and cheaply. Rather than creating more stuff within the same timeframe, we should invest the time we save in pondering and thinking harder on “why we create this thing”.
Guardrails
Be a responsible centaur but not a reverse centaur
Inspired by this post, we should use AI to assist our work, as centaurs use machines to improve theirs, but not become reverse centaurs who serve as AI’s physical limbs. In particular, I will insist on doing the hard work of “thinking through writing” and not let AI take that away from me.
Never personalise or anthropomorphise AI
I believe God made humans in His own image, and thus humans are more than mere beings with broad knowledge and the ability to answer questions. While a large language model (LLM) can interact with its users in natural language, we should not expect it to behave like a human, nor should we interact with it as we would with one.
Zero trust in LLM answers and generated content
Advancements in maths and technology have drastically improved AI’s ability to process data and identify correlations and patterns. It is great at predicting outcomes based on the vast amounts of data it has processed and the numerous “trials and experiments” it has conducted. However, this makes it a great “prediction machine”, not a “truth-telling machine”, and we should always be skeptical of AI’s output and apply our own judgment.
Areas where I’m experimenting with AI
Programming - navigating a complex codebase
A complex codebase is hard to understand and hard to get started with. There is a vast amount of code written with different tradeoffs, styles, and contexts. It’s like walking into a jungle, not knowing what to expect. Since an LLM is good at processing large amounts of content and generating summaries, we can use it to generate a high-level overview of a complex codebase and use that overview as a “map” for navigating it. We could also use an LLM to “zoom in” on different areas of the codebase and ask it to look for specific things. When used well, the LLM becomes the “map and compass” that helps programmers navigate complex codebases.
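As one concrete (and purely hypothetical) sketch of this idea: the snippet below assembles a “map” prompt from a repository’s file inventory, which could then be handed to whichever LLM you use. The function name and prompt wording are my own illustration, not tied to any particular tool or API.

```python
import os

def build_codebase_prompt(root: str, max_files: int = 50) -> str:
    """Walk a repository and wrap its file inventory in a prompt that
    asks an LLM for a high-level architectural overview (a "map")."""
    entries = []
    for dirpath, dirnames, filenames in os.walk(root):
        # Prune hidden directories such as .git to keep the inventory focused.
        dirnames[:] = [d for d in dirnames if not d.startswith(".")]
        for name in sorted(filenames):
            full = os.path.join(dirpath, name)
            rel = os.path.relpath(full, root)
            entries.append(f"- {rel} ({os.path.getsize(full)} bytes)")
    inventory = "\n".join(sorted(entries)[:max_files])
    return (
        "You are helping me navigate an unfamiliar codebase.\n"
        "Here is its file inventory:\n"
        f"{inventory}\n"
        "Please give a high-level overview of the main modules, how they "
        "relate to each other, and where a newcomer should start reading."
    )
```

The same scaffolding could support the “zoom in” use as well: swap the closing instruction for a question about one directory, and filter the inventory to that subtree.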
Ideation - rapid, throwaway prototyping
One of the key aspects of successful software is usability. We can indeed ideate, design, and test the user experience of a design with a paper prototype, but that can only go so far. With LLM-assisted prototyping, we can create a “clickable” prototype at a much lower cost (in both time and labour) and give testers an experience closer to the real product, thereby improving the quality of their feedback.
Pondering - get more out of what I learn
LLMs trained on massive amounts of internet data are good at assimilating similar ideas and generating responses from different perspectives. What if we made use of these LLMs to challenge our views and opinions? What if we asked them to play devil’s advocate and prompt us to debate our ideas against different ideologies? However, I need to make sure I only use LLMs to generate questions and challenges as discussion material with others, and do not merely debate with a machine by myself.