Climate scientists and anthropologists have said that we are now living in the Anthropocene— a period of time "during which humanity has become a planetary force of change."1 For me (a professional software engineer) and many others, the impact of artificial intelligence (AI) has become a defining feature of this era.
Advancements in AI technology have already begun to transform many aspects of our lives— from how we work to how we interact with the world around us. At times, reality feels difficult to distinguish from an episode of Black Mirror — I suspect I may be a powerless witness to a dystopian future unfolding. Simultaneously, the cycle of relentless innovation is enrapturing.

Large Language Models
With a deep interest in computation, I can't help but feel compelled to follow the latest trends in programming— and to actively experiment when I can. In recent months, it's been hard to ignore the adoption of AI-assisted programming tools like GitHub Copilot, Cursor, Continue, and others.
These tools leverage large language models (LLMs) to assist developers in writing code more efficiently. They can suggest code snippets, complete functions, and even generate entire blocks of code based on natural language descriptions.
Knowledge Workers
As part of my qualifying graduate project, Signs of Life, I explored the concept of knowledge work and its implications for the future of labor.
The core idea behind knowledge work is that it relies heavily on cognitive skills, problem-solving, and creativity. Unlike traditional manual labor, which often involves repetitive physical tasks, knowledge work requires critical thinking and the ability to process and analyze information.
This includes a widening range of professions— from software developers to researchers, writers, and beyond. The rise of AI has significantly impacted these fields, automating routine tasks and augmenting human capabilities.
The End of Work
The World Economic Forum's 2020 Future of Jobs Report estimated that, by 2025, 85 million jobs may be displaced due to the adoption of AI and automation. However, it also predicted that 97 million new roles may emerge that are better adapted to the new division of labor between humans and machines.
Many professional roles are already being replaced as AI agents rapidly mature. In October 2024, Google CEO Sundar Pichai said that "AI is writing more than 25% of new code at Google."2 Elon Musk, an ally of the Trump administration and the world's richest person, has similarly suggested a future in which "no job is needed" because robots and AI will satisfy all our human needs and desires. Bill Gates, the 6th richest person in the world and co-founder of Microsoft, has also predicted that humans won't be needed "for most things".3
In January 2025, in an interview with Joe Rogan, Mark Zuckerberg— now the third-richest person in the world— also remarked that AI will replace mid-level engineers in 2025. Understandably, these statistics and declarations have sparked debate about the future of work, income inequality, and the need for new social safety nets.
Homo Deus
In his book Homo Deus, Yuval Noah Harari— a well-known historian and popular science writer— distills life down to the flow of information. His ideas about the future of automation and human augmentation were influential in my graduate studies.
He talks about the impact of economic disparity in a speculative future where humans have become highly augmented by advanced technology. Political and economic supremacy would be determined by the sophistication of a given nation's technology— specifically, its AI agents.
Augmentation
Harari's speculative future is at once hopeful and horrifying. Included in his predictions is the inevitable melding of digital computers with the human mind. At first, such technology appears outlandish and implausible— the stuff of true science fiction.
That is, until you see that these predictions are backed by real citations to groundbreaking work in neuroscience and computation. Scientists and engineers have succeeded in rat mind-control, mentally-activated robotic prosthetics, and helmets that can directly manipulate your neural activity (producing or inhibiting feelings similar to a psychoactive drug).
Brain-to-Text
Recently, a paper published by Meta described a groundbreaking approach for decoding brain signals into text. This research utilized a non-invasive method (magnetoencephalography, or MEG) to interpret neural activity associated with language processing— effectively enabling direct "brain-to-text" communication.
The first MEG imaging chamber was invented circa 1968 by Dr. David Cohen. Coincidentally, James E. Zimmerman (a researcher at Ford Motor Company) developed a critical partner technology around the same time— one of the first superconducting quantum interference devices (a "SQUID").

A SQUID is "a very sensitive magnetometer used to measure extremely weak magnetic fields"— such as those emitted by the electrical activity of the brain. The SQUID allowed Cohen's MEG machine to successfully measure the magnetic field at a number of points around a subject's head.
Neuralink
Elon Musk's Neuralink company and others have been developing brain-computer interfaces (BCIs) with similar goals— translating brain activity into digital signals that can be interpreted by a machine. In January 2024, Noland Arbaugh (paralyzed from a diving accident) actually became the first human patient to be surgically implanted with one of their devices.4
Neuralink's BCI is an invasive approach— the devices are surgically implanted within the skull in order to record the electrical impulses of the brain without interference. The skull acts as a natural shield to the brain's electrical activity, which leads to lower resolution with traditional EEG technology.
The MEG machine used in Meta's study— a Neuromag produced by Elekta— appears to be a valuable non-invasive alternative for gathering training data, albeit one that re-introduces the problem of portability.
The Vibe Coder
In February 2025, Andrej Karpathy (co-founder of OpenAI) coined the term "vibe coding". The vibe coder "fully gives in to the vibes"— relying on natural language commands to develop with LLMs in a conversational manner.
It's not really coding— I just see things, say things, run things, and copy-paste things, and it mostly works.5
When something doesn't work, the vibe coder simply re-engages the LLM in a conversational feedback loop— offering new details and asking for additional help.
Reception
For many software engineers (myself included), the concept of vibe coding has been met with heavy skepticism. The term has often been reduced to a meme— especially as a way to shame inexperienced developers for "coding" with little consideration for security or optimization.
At the other end of the spectrum, new programmers are flocking to these tools and even praising vibe coding as the future of software development.
Ready or Not, Here I Come
Whether AI-assisted coding is ready for production application development or not (and I think not), the culture-shock it's sent rippling across the tech community is undeniable. I decided to make an earnest effort to restructure my development environment so that I could begin testing a variety of these tools myself.
Admittedly, I don't work at Google or Meta or Microsoft or OpenAI. I don't have a sense of how my experience might differ from those who are actually using LLMs at the same companies where they're being actively developed. I'd be very curious to hear from those people and to know whether the capabilities they're seeing are comparable— or so otherworldly as to justify some of the massive layoffs we've seen.
Through research, exploration, and experimentation, I hope to gauge the shortcomings of LLM-assisted programming much more intelligently— and to speculate on how it might change programming forever.
Becoming a Vibe Coder
Before I describe my experience with AI-assisted programming, I think there's some important context to set the stage:
Neovim
As a faithful acolyte of Neovim, I refused to give up the terminal for a GUI application. Perhaps one day I'll find reasons so compelling that I'll switch to another text editor altogether, but not today. Rather than pick up an entirely new interface like Cursor or Continue, I decided to adapt Neovim with plugins.
Not all LLMs are equal:
- Some are configured for specific programming languages.
- Some aren't really made for programming-specific tasks at all.
- Models can be run on your local computer, not just in the cloud.
- Cloud models will generally run faster.
- Smaller models will run faster.
- You should never load a model larger than your total available memory (minus 1-2 GB for operating system tasks), or it will spill into swap and run painfully slowly (see the rough estimate after this list).
- An LLM without stored context is like talking to a person with amnesia: it won't remember what you were talking about.
- More context means more memory.
- More parameters generally means more intelligent results (and more memory).
- Model performance metrics and rankings can be found on a number of websites.
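To put the memory bullet in more concrete terms, here's a rough back-of-the-envelope sketch. The numbers are illustrative assumptions— actual usage depends on the model, the quantization, and the runtime:
-- Rough estimate of the memory needed to load a local model (illustrative only)
-- bytes_per_param: ~2 for fp16 weights, ~0.5 for 4-bit quantization
-- overhead_gb: headroom for the context window, KV cache, and runtime
local function estimate_gb(params_billion, bytes_per_param, overhead_gb)
  return params_billion * bytes_per_param + (overhead_gb or 2)
end

print(estimate_gb(7, 0.5)) -- a 7B model at 4-bit: ~5.5 GB, comfortable on a 16 GB machine
print(estimate_gb(7, 2))   -- the same model at fp16: ~16 GB, already pushing it
The takeaway is the same as the bullet above: leave a healthy margin below your total memory, or the model will crawl.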
Tooling
The number of AI plugins for Neovim is growing by the day. Things are changing so rapidly that my reviews and recommendations might prove useless in a few months. Regardless, here are the technologies I found most useful in composing an AI-assisted developer experience:
I'm using lazy.nvim for Neovim plugin management. My filetree looks like this:
plugins.lua
plugins/
├── codecompanion.lua
├── markview.lua
└── ...
Here's a link to my full Neovim dotfiles on GitHub.
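For context, the top-level plugins.lua is where lazy.nvim gets bootstrapped and where the plugin specs are registered. A minimal sketch using lazy.nvim's standard bootstrap snippet (my actual file lives in the dotfiles repo linked above):
-- plugins.lua: bootstrap lazy.nvim, then register plugin specs
-- (per-plugin settings live in plugins/*.lua and are required from each spec's config)
local lazypath = vim.fn.stdpath("data") .. "/lazy/lazy.nvim"
if not (vim.uv or vim.loop).fs_stat(lazypath) then
  vim.fn.system({
    "git", "clone", "--filter=blob:none",
    "https://github.com/folke/lazy.nvim.git", lazypath,
  })
end
vim.opt.rtp:prepend(lazypath)

require("lazy").setup({
  -- plugin specs go here, e.g. the CodeCompanion, Markview, and Copilot blocks below
})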
codecompanion.nvim
The CodeCompanion plugin allows a developer to open a split-pane chat buffer, very similar to Continue/Cursor/Copilot. Since the chat window is literally a Neovim buffer, all the content can be visually selected and searched with standard Vim shortcuts.
{
  "olimorris/codecompanion.nvim",
  dependencies = {
    "nvim-lua/plenary.nvim",
    "nvim-treesitter/nvim-treesitter",
  },
  config = function()
    -- See dotfiles repo for my full CodeCompanion configuration
    -- https://github.com/l00sed/nvim-lua-config
    require("plugins.codecompanion")
  end
}
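The require("plugins.codecompanion") call above points at my full configuration. As a pared-down sketch of what a plugins/codecompanion.lua might contain (the strategies/adapter options follow CodeCompanion's documentation, so treat this as an illustration rather than a drop-in file):
-- plugins/codecompanion.lua (pared-down sketch; see the dotfiles repo for the real thing)
require("codecompanion").setup({
  strategies = {
    -- Point both the chat buffer and inline edits at the Copilot adapter
    chat = { adapter = "copilot" },
    inline = { adapter = "copilot" },
  },
})

-- Toggle the split-pane chat buffer from normal or visual mode
vim.keymap.set({ "n", "v" }, "<leader>cc", "<cmd>CodeCompanionChat Toggle<cr>",
  { silent = true, desc = "Toggle CodeCompanion Chat" })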
markview.nvim
I recommend this plugin (or, alternatively, render-markdown.nvim) for rendering markdown features in the terminal. It makes an LLM's markdown output in the chat buffer much more readable.
{
  "OXY2DEV/markview.nvim",
  lazy = false, -- Recommended
  dependencies = {
    "nvim-treesitter/nvim-treesitter",
    "nvim-tree/nvim-web-devicons"
  },
  config = function()
    -- See dotfiles repo for my full Markview configuration
    -- https://github.com/l00sed/nvim-lua-config
    require('plugins.markview')
  end
}
copilot.vim
Though I still run local models, cloud models are generally going to be better (faster and more intelligent). This is because cloud models run on the newest, most expensive GPU hardware available— and send back results about as fast as your Internet connection allows. Those GPUs have massive amounts of memory and can quickly load a huge LLM (with many billions of parameters).
Local models still feel like having a Swiss army knife— basically an offline Stack Overflow. However, Copilot (and other cloud-based models) will yield much better results. You can use Copilot for free for a limited number of requests. Copilot also has a great completion mechanism in Neovim— allowing tab-completion of inline virtual-text suggestions. Together with CodeCompanion, it can be run in chat mode as well.
{
  'github/copilot.vim',
  config = function()
    -- Disable the default Tab mapping so completion keys can be remapped
    vim.g.copilot_no_tab_map = true
    -- control + w (accept the next word of the suggestion)
    vim.keymap.set({ "i" }, "<C-w>", "<Plug>(copilot-accept-word)", { silent = true, desc = "Copilot: accept word" })
    -- control + e (accept the next line of the suggestion)
    vim.keymap.set({ "i" }, "<C-e>", "<Plug>(copilot-accept-line)", { silent = true, desc = "Copilot: accept line" })
  end
}
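One caveat: setting g:copilot_no_tab_map disables the default Tab binding entirely, so you still need a mapping that accepts a whole suggestion. The Lua equivalent of the mapping shown in copilot.vim's help looks roughly like this (the key choice is mine):
-- Accept the entire Copilot suggestion with control + j
vim.keymap.set("i", "<C-j>", 'copilot#Accept("\\<CR>")', {
  expr = true,
  replace_keycodes = false,
  silent = true,
  desc = "Copilot: accept suggestion",
})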
Ghostty
Being on macOS, I can use the built-in dictation function for quick speech-to-text with any native software. Since the Ghostty terminal emulator is built as a native macOS application, I'm able to press a shortcut and dictate directly into the AI chat buffer. Other native terminal emulators, like iTerm2, should be able to do the same.
To make the shortcut key more accessible from the home row, I also updated my settings to trigger dictation when pressing the CMD key twice:

Evaluating my Vibe Coding Setup
Putting it all together, these plugins provide a setup that I believe is fairly on par with some of the batteries-included GUI applications like Cursor or Continue. To test out the setup, I decided to challenge my LLM assistant with the task of producing a game of Snake as a React component.
After about 18 minutes of vibe coding, I was able to get a pretty decent game of Snake! LLMs are really good at this kind of generative work. As you can see in the video, there's still a lot of manual code manipulation to integrate the new component.
Use h, j, k, l or the standard directional keys to try it out. Watch out for the classic Ouroboros gotcha.
Closing Thoughts
Natural Language
Josh Comeau— a popular tech blogger with inspirational frontend work— did an incredible demonstration of hands-free coding without natural language dictation in 2020. Instead, Josh made use of the Talon Voice application. This software was developed specifically to provide an accessible option for programming using vocal commands rather than mechanical input.
Josh wrote about his experience with Talon at the same time GPT-3 was released. Instead of natural language commands, Talon uses a codified language where special sounds, like "slap", replace traditional keyboard actions (like returning to a new line). I'd be curious to read Josh's thoughts if he ever revisits that journey with the additional technology available today.
The benefit I see in natural language dictation is the ability to communicate intent more quickly than typing allows. The LLM, working as a translator, converts that natural language into code. However, LLMs aren't quite ready to copy the code into the right file or open the browser to test the results. The human-in-the-loop is still very much needed.
Perhaps this limitation will be erased with the adoption of the Model Context Protocol (MCP)— enabling LLM-powered agents to directly interact with other software.
Footnotes
- https://web.archive.org/web/20250225122509/https://www.businessinsider.com/career-ladder-software-engineers-collapsing-ai-google-meta-coding-2025-2 ↩
- https://www.cnbc.com/2025/03/26/bill-gates-on-ai-humans-wont-be-needed-for-most-things.html#:~:text=Over%20the%20next%20decade%2C%20advances%20in%20artificial%20intelligence%20will%20mean%20that%20humans%20will%20no%20longer%20be%20needed%20%E2%80%9Cfor%20most%20things%E2%80%9D%20in%20the%20world%2C%20says%20Bill%20Gates ↩
- https://www.businessinsider.com/vibe-coding-ai-silicon-valley-andrej-karpathy-2025-2 ↩