I read two interesting articles the other day: one about how you build personality into an AI, and the other about hands-on experiences building AI products. They sort of complemented each other, and they also made me think about something I learned back at university when I studied urban resilience.
The first article talks about the personality of an AI and how that can be a selling point, part of the product-market fit. OpenAI is, for example, more permissive than Google’s Gemini. This has some people on Twitter up in arms, but making a “safe” AI can help attract certain kinds of user groups. If a company wants a particular tone and wants to prevent its AI assistant from saying controversial stuff, there’s Gemini; for others, there’s Grok…
It’s interesting that these companies are trying to compete on things other than accuracy, speed, etc. I thought that kind of competition would come later, once these technologies were more commodified.
Another thing that struck me while reading it is that all of these companies are trying to position themselves to be THE AI service. Their ambition is to become the next platform in AI, similar to Google in search, Facebook in social media, or Amazon in online retail. Right now we have so many options because everyone is making a bet to be the next monopoly. Investors are happy to bet on a dark horse because, if worst comes to worst, the company will be acquired.
Laws have been proposed about AI, about what data the models are based on, and about how they are allowed to be used, but what we really need to discuss is how to prevent more monopolies (and dismantle the ones we already have).
Going from the boardroom to the shop floor: the second article is about lessons learned while building AI products hands-on. The discussion in the news is that AI will come in and completely take over: you just feed the AI some input, write a prompt, and it does the whole job. This framing reinforces the idea of one big monopoly platform that can do everything (and also AGI, a topic I don’t want to touch with a stick).
One of the key takeaways from the post is that you shouldn’t try to do everything, because the result will be bad and you will fail. Instead, find a narrow use case and build a tailor-made solution powered by AI. This is so realistic and practical that I was surprised to find it in a post about AI. And it reflects the thoughts of someone working at the sharp end of AI.
At university I studied urban resilience, safety, and disaster studies. Very interesting topics for people who like to mix technology, culture, organizations, and education.
During that time I read about the concept of the “sharp and blunt end” of organizations. Let’s say there is a big accident. The sharp end of that accident is the ambulance crews who come to the scene, pick people up, and get them to the hospital. They are hands-on, they are under time pressure, and they are working with a limited understanding of the overall picture (among many other things).
On the other side you have the blunt end of the organization. These are the administrators and managers at the hospital, and at the really blunt end you have the politicians who allocate the budget. Here time moves more slowly; a decision can wait until the next department meeting.
The needs and responsibilities are very different between the blunt and sharp ends, and it feels like we view AI from a blunt-end perspective. The details and hands-on needs are waved away, and instead there are some poorly made studies that simplify things way too much. Maybe it’s just the sources I read when it comes to AI, and I should adjust my media diet.
I don’t really have a point here, other than that two articles made me think and an old concept tied them together. The blunt end/sharp end is also a very useful tool when discussing B2B products; just throwing it out there. Give the articles a read.