Coding with Vibes

12:00 PM · April 3, 2025 · Daniel Tompkins


Climate scientists and anthropologists have said that we are now living in the Anthropocene— a period of time "during which humanity has become a planetary force of change."1 For me (a professional software engineer) and many others, the impact of artificial intelligence (AI) has become a defining feature of this era.

Advancements in AI technology have already begun to transform many aspects of our lives— from how we work to how we interact with the world around us. At times, reality feels difficult to distinguish from an episode of Black Mirror— I suspect I may be a powerless witness to a dystopian future unfolding. Simultaneously, the cycle of relentless innovation is enrapturing.

A view of the Earth at night, showing city lights across the continents
By Data: Marc Imhoff/NASA GSFC, Christopher Elvidge/NOAA NGDC; Image: Craig Mayhew and Robert Simmon/NASA GSFC https://visibleearth.nasa.gov/view.php?id=55167 (image link), Public Domain, https://commons.wikimedia.org/w/index.php?curid=233702

Large Language Models

With a deep interest in computation, I can't help but feel compelled to follow the latest trends in programming— and to actively experiment when I can. In recent months, it's been hard to ignore the adoption of AI-assisted programming tools like GitHub Copilot, Cursor, Continue, and others.

These tools leverage large language models (LLMs) to assist developers in writing code more efficiently. They can suggest code snippets, complete functions, and even generate entire blocks of code based on natural language descriptions.

Knowledge Workers

As part of my qualifying graduate project, Signs of Life, I explored the concept of knowledge work and its implications for the future of labor.

The core idea behind knowledge work is that it relies heavily on cognitive skills, problem-solving, and creativity. Unlike traditional manual labor, which often involves repetitive physical tasks, knowledge work requires critical thinking and the ability to process and analyze information.

This includes a widening range of professions— from software developers to researchers, writers, and beyond. The rise of AI has significantly impacted these fields, automating routine tasks and augmenting human capabilities.

The End of Work

The World Economic Forum's 2020 Future of Jobs Report estimated that, by 2025, 85 million jobs may be displaced due to the adoption of AI and automation. However, the same report predicts that 97 million new roles may emerge that are better adapted to the new division of labor between humans and machines.

Many professional roles are already being replaced as AI agents rapidly mature. In October 2024, Google CEO Sundar Pichai said that "AI is writing more than 25% of new code at Google."2 Elon Musk, an ally of the Trump administration and the world's richest person, has similarly suggested a future in which "no job is needed" because robots and AI will satiate all our human needs and desires. Bill Gates, the 6th richest person in the world and co-founder of Microsoft, has also predicted that humans won't be needed "for most things".3

In January 2025, in an interview with Joe Rogan, Mark Zuckerberg— now the third-richest person in the world— also remarked that AI will replace mid-level engineers in 2025. Reasonably, these statistics and declarations have sparked debate about the future of work, income inequality, and the need for new social safety nets.

Homo Deus

In his book Homo Deus, Yuval Noah Harari— a well-known historian and popular science writer— distills life down to the flow of information. His ideas about the future of automation and human augmentation were influential in my graduate studies.

He talks about the impact of economic disparity in a speculative future where humans have become highly augmented by advanced technology. Political and economic supremacy would be determined by the sophistication of a given nation's technology— specifically, its AI agents.

Augmentation

Harari's speculative future is at once hopeful and horrifying. Included in his predictions is the inevitable melding of digital computers with the human mind. At first, such technology appears outlandish and implausible— the stuff of true science fiction.

That is, until you see that these predictions are padded with real citations to groundbreaking work in neuroscience and computation. Scientists and engineers have succeeded in rat mind-control, mentally-activated robotic prosthetics, and helmets that can directly manipulate your neural activity (producing or inhibiting feelings similar to a psychoactive drug).

Brain-to-Text

Recently, a paper published by Meta described a groundbreaking approach for decoding brain signals into text. This research utilized a non-invasive method (magnetoencephalography, or MEG) to interpret neural activity associated with language processing— effectively allowing direct "brain-to-text" communication.

The first MEG imaging chamber was invented circa 1968 by Dr. David Cohen. Coincidentally, James E. Zimmerman (a researcher at Ford Motor Company) developed a critical partner technology around the same time— one of the first superconducting quantum interference devices (a "SQUID").

Elekta's magnetoencephalography (MEG) machine— a large crescent-shaped chair capable of recording the electromagnetic impulses of an organic brain.

A SQUID is "a very sensitive magnetometer used to measure extremely weak magnetic fields"— such as those emitted by the electrical activity of the brain. The SQUID allowed Cohen's MEG machine to successfully measure the magnetic field at a number of points around a subject's head.

Elon Musk's Neuralink company and others have been developing brain-computer interfaces (BCIs) with similar goals— translating brain activity into digital signals that can be interpreted by a machine. In January 2024, Noland Arbaugh (paralyzed from a diving accident) actually became the first human patient to be surgically implanted with one of their devices.4

Neuralink's implant BCI is an invasive approach— the devices are surgically implanted within the skull in order to record the electrical impulses of the brain without interference. The skull acts as a natural shield to the brain's electrical activity, which leads to lower resolutions with traditional EEG technology.

The MEG machine used in Meta's study, the Neuromag produced by Elekta, appears to be a valuable non-invasive alternative for gathering training data— albeit one that re-introduces the problem of portability.

The Vibe Coder

In February 2025, Andrej Karpathy (co-founder of OpenAI) coined the term "vibe coding". The vibe coder "fully gives in to the vibes"— relying on natural language commands to develop with LLMs in a conversational manner.

It's not really coding— I just see things, say things, run things, and copy-paste things, and it mostly works.5

When something doesn't work, the vibe coder simply re-engages the LLM in a conversational feedback loop— offering new details and asking for additional help.

Reception

For many software engineers (myself included), the concept of vibe coding has been met with heavy skepticism. It has often been reduced to a meme— especially as a way to shame inexperienced developers for "coding" with little consideration for security or optimization.

At the other end of the spectrum, new programmers are flocking to these tools and even praising vibe coding as the future of software development.

Ready or Not, Here I Come

Whether AI-assisted coding is ready for production application development or not (and I think not), the culture shock it's sent rippling across the tech community is undeniable. I decided to make an earnest effort to restructure my development environment so that I could begin testing a variety of these tools myself.

Admittedly, I don't work at Google or Meta or Microsoft or OpenAI. I don't have a sense of how my experience might differ from those who are actually using LLMs at the same companies where they're being actively developed. I'd be very curious to hear from those people and to know whether or not the capabilities they're seeing are comparable— or if they're so otherworldly as to justify some of the massive layoffs we've seen.

Through research, exploration, and experimentation, I hope that I can much more intelligently gauge the shortcomings of LLM-assisted programming— as well as speculate on how it might change programming forever.

Becoming a Vibe Coder

Before I describe my experience with AI-assisted programming, I think there's some important context to set the stage:

Neovim

As a faithful acolyte of Neovim, I refused to give up the terminal for a GUI application. Perhaps one day I'll find reasons so compelling that I'll switch to another text editor altogether, but not today. Rather than pick up an entirely new interface like Cursor or Continue, I decided to adapt Neovim with plugins.

Not all LLMs are equal.

  • Some are configured for specific programming languages.
  • Some aren't really made for programming-specific tasks at all.
  • Models can be run on your local computer, not just in the cloud.
  • Cloud models will generally run faster.
  • Smaller models will run faster.
  • Never load a model larger than your total available memory (minus 1–2 GB for operating-system tasks); otherwise it will spill into swap and slow to a crawl.
  • An LLM without stored context is like talking to a person with amnesia— it won't remember what you were talking about.
  • More context means more memory.
  • More parameters generally means more intelligent results (and more memory).
  • Model performance metrics and rankings can be found on a number of websites.
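To make the memory points above concrete, here's a back-of-the-envelope sketch. This is my own rule of thumb, not a published formula— it counts only the weights and ignores context-window and runtime overhead, which is why you still want that 1–2 GB of headroom:

```lua
-- Rough weight-memory estimate: parameters × bits-per-weight ÷ 8.
-- bits_per_weight depends on quantization: 16 for fp16, ~4 for 4-bit
-- quantized files. Real usage is higher once context is loaded.
local function weight_memory_gb(params_billions, bits_per_weight)
  return params_billions * bits_per_weight / 8
end

print(weight_memory_gb(7, 4))   -- 4-bit 7B model: 3.5 GB of weights
print(weight_memory_gb(70, 16)) -- fp16 70B model: 140 GB, cloud territory
```

So a 4-bit 7B model fits comfortably on a 16 GB laptop, while an unquantized 70B model is out of reach for nearly all consumer hardware.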

Tooling

The number of AI plugins for Neovim is growing rapidly. These tools are changing so quickly that my reviews and recommendations might prove useless in a few months. Regardless, here are the technologies I found most useful in composing an AI-assisted developer experience:

Note on Neovim Plugins

I'm using lazy.nvim for Neovim plugin management. My filetree looks like this:

```
plugins.lua
plugins/
├── codecompanion.lua
├── markview.lua
└── ...
```

Here's a link to my full Neovim dotfiles on GitHub.
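For readers unfamiliar with lazy.nvim, the wiring behind that filetree looks roughly like this. This is a sketch based on lazy.nvim's standard bootstrap pattern— the assumption here is that plugins.lua returns the spec list, while each spec's config function requires its matching module from the plugins/ directory (see the dotfiles link for the real thing):

```lua
-- Bootstrap lazy.nvim if it isn't installed yet (the standard pattern
-- from the lazy.nvim README; the clone path is lazy.nvim's default).
local lazypath = vim.fn.stdpath("data") .. "/lazy/lazy.nvim"
if not (vim.uv or vim.loop).fs_stat(lazypath) then
  vim.fn.system({
    "git", "clone", "--filter=blob:none",
    "https://github.com/folke/lazy.nvim.git", lazypath,
  })
end
vim.opt.rtp:prepend(lazypath)

-- plugins.lua returns the spec list; each spec's config function then
-- pulls in its per-plugin module (e.g. require("plugins.codecompanion")).
require("lazy").setup(require("plugins"))
```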

codecompanion.nvim

The CodeCompanion plugin allows a developer to open a split-pane chat buffer, very similar to Continue/Cursor/Copilot. Since the chat window is literally a Neovim buffer, all the content can be visually selected and searched with standard Vim shortcuts.

plugins.lua

```lua
{
  "olimorris/codecompanion.nvim",
  dependencies = {
    "nvim-lua/plenary.nvim",
    "nvim-treesitter/nvim-treesitter",
  },
  config = function()
    -- See dotfiles repo for my full CodeCompanion configuration
    -- https://github.com/l00sed/nvim-lua-config
    require("plugins.codecompanion")
  end
}
```
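CodeCompanion exposes user commands that are easy to bind. Here's a minimal sketch of keymaps for the chat buffer— the `<leader>` bindings are my own illustration, not the plugin's defaults, though `:CodeCompanionChat` with the `Toggle` and `Add` arguments comes from the plugin itself:

```lua
-- Toggle the split-pane chat buffer from normal or visual mode
vim.keymap.set({ "n", "v" }, "<leader>cc", "<cmd>CodeCompanionChat Toggle<cr>",
  { silent = true, desc = "Toggle CodeCompanion chat" })

-- Send the current visual selection into the chat as context
vim.keymap.set("v", "<leader>ca", "<cmd>CodeCompanionChat Add<cr>",
  { silent = true, desc = "Add selection to CodeCompanion chat" })
```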

markview.nvim

I recommend this plugin (or, alternatively, render-markdown.nvim) for rendering markdown features in the terminal. This provides much better readability for an LLM's markdown output in the chat buffer.

plugins.lua

```lua
{
  "OXY2DEV/markview.nvim",
  lazy = false, -- Recommended
  dependencies = {
    "nvim-treesitter/nvim-treesitter",
    "nvim-tree/nvim-web-devicons"
  },
  config = function()
    -- See dotfiles repo for my full Markview configuration
    -- https://github.com/l00sed/nvim-lua-config
    require('plugins.markview')
  end
}
```

copilot.vim

Though I still run local models, cloud models are generally going to be better (faster and more intelligent). This is because cloud models run on the newest, most expensive GPU hardware available— sending back results about as fast as your Internet connection allows. Those GPUs have massive amounts of memory and can quickly load a huge LLM (with many billions of parameters).

Local models still feel like having a Swiss army knife— basically an offline Stack Overflow. However, Copilot (and other cloud-based models) will yield much better results. You can use Copilot for free for a limited number of requests. Copilot also has a great completion mechanism in Neovim— allowing tab-completion on inline virtual-text suggestions. Together with CodeCompanion, it can be run in chat mode as well.

plugins.lua

```lua
{
  'github/copilot.vim',
  config = function()
    -- Remap completion keys
    vim.g.copilot_no_tab_map = true
    -- control + w (accept word)
    vim.keymap.set({ "i" }, "<C-w>", "<Plug>(copilot-accept-word)",
      { silent = true, desc = "Copilot: accept next word" })
    -- control + e (accept line)
    vim.keymap.set({ "i" }, "<C-e>", "<Plug>(copilot-accept-line)",
      { silent = true, desc = "Copilot: accept current line" })
  end
}
```

Ghostty

Being on macOS, I can use the built-in dictation function for quick speech-to-text with any native software. The Ghostty terminal emulator is built as a native macOS application, so I'm able to press a shortcut and speak directly into the AI chat buffer. Other native terminal emulators, like iTerm2, should be able to do the same.

To make the shortcut key more accessible from the home row, I also updated my settings to trigger dictation when pressing the CMD key twice:

Screenshot of the MacOS dictation settings, showing shortcut options.

Evaluating my Vibe Coding Setup

Putting it all together, these plugins provide a setup that I believe is fairly on par with some of the batteries-included GUI applications like Cursor or Continue. To test out the setup, I decided to challenge my LLM assistant with the task of producing a game of Snake as a React component.

After about 18 minutes of vibe coding, I was able to get a pretty decent game of Snake! LLMs are really good at this kind of generative work. As you can see in the video, there's still a lot of manual code manipulation needed to integrate the new component.

Use h, j, k, l or the standard directional keys to try it out. Watch out for the classic Ouroboros gotcha.


Closing Thoughts

Natural Language

Josh Comeau— a popular tech blogger with inspirational frontend work— did an incredible demonstration of hands-free coding without natural-language dictation in 2020. Instead, Josh made use of the Talon Voice application. This software was developed specifically to provide an accessible option for programming using vocal commands rather than mechanical input.

Josh wrote about his experience with Talon around the same time GPT-3 was released. Instead of natural-language commands, Talon uses a codified language where special sounds, like "slap", replace traditional keyboard actions (like pressing Return for a new line). I'd be curious to read Josh's thoughts if he ever revisits that journey with the additional technology available today.

The benefit I see in natural-language dictation is the ability to communicate data more quickly. The LLM, working as a translator, converts that natural language to code. However, LLMs aren't quite ready to copy the code into the right file or open the browser to test the results. The human-in-the-loop is still very much needed.

Perhaps this limitation will be erased with the onset of the Model Context Protocol (MCP)— enabling LLM-powered agents to directly interact with other software.

Footnotes

  1. https://en.wikipedia.org/wiki/Anthropocene

  2. https://web.archive.org/web/20250225122509/https://www.businessinsider.com/career-ladder-software-engineers-collapsing-ai-google-meta-coding-2025-2

  3. https://www.cnbc.com/2025/03/26/bill-gates-on-ai-humans-wont-be-needed-for-most-things.html

  4. https://www.bbc.com/news/articles/cewk49j7j1po

  5. https://www.businessinsider.com/vibe-coding-ai-silicon-valley-andrej-karpathy-2025-2