The Antsy ML of 2022
The year 2022 brought us Stable Diffusion and ChatGPT, a new interface to the GPT-3 model. Neither technology is strictly new, but both crossed some threshold, created a splash, and brought This New Feeling. And I think that's because they hit too close to home.
The promise of AI replacing a wide variety of jobs has been there since the field's inception. It got real in the last decade, and a variety of futurists predicted which jobs would go first. Yet many of those futurists were technologists, and two groups always felt safe: developers (somebody has to take care of those machines!) and artists (Humans! Emotions!). This year made both antsy. Both GPT and SD crossed the subjective threshold of usefulness, opened the mainstream's eyes to the future potential, and amazed everyone with the speed of learning.
I think we're still quite far from replacing either role completely. However, as part of a wider trend, we're almost there with replacing the junior, entry-level jobs. Concept art for a 3D model of a new game character? Not yet. A small picture on the wall in a location the player is running through? Totally, possibly with a location-specific soundtrack. Design of complex distributed systems, or exploiting new technologies for innovative businesses? Not yet. Generating that React component for you? It's already in your browser.
This will not replace the top 10% (or 1%), but it will do for the rest, and the path to reaching the top will get longer. I'm curious where the limits imposed by the training data lie: music with unique instruments, or clever code tricks that never appeared in the training set. I joked at work that people should dump BTC for "investment code" instead.
I expect growing pushback, since this affects wealthy and powerful classes, so it won't be shrugged off as quickly as factory workers were. An obvious friction point is the copyright and IP status of the models' inputs, and I wouldn't be surprised to see more regulation of what gets fed in, e.g. requiring explicit consent. I'm even more interested in liability: if developers start using Copilot-like systems as part of their work, I can see them soon creating systems beyond their level of understanding. In that case, I'd expect users to demand guarantees from providers, which would create an interesting problem akin to autopilot in cars.
Either way, I expect AI literacy to soon become the new digital literacy: a differentiator between socio-economic groups, with widening gaps in standards of living and views of the world. Who has access, and under which conditions, may shape new power dynamics quite soon.
Published in Notes and tagged: could be written by GPT • employment • unsupported random ideas