Podcast #6: Artificial intelligence in practice – tips, tools and safe use
In this episode of the podcast “Na vlne kodu” (in English, On the Wave of Code), we took inspiration from our listeners. We dive deeper into the world of AI with data scientist Michal Bystricky and system administrator Jakub Novak from msg life Slovakia. Listen in for valuable tips on how to use AI effectively in practice and make it your ally rather than your competitor.

The AI era is already here, and it’s advancing continuously. It’s clear that we will encounter it more and more often, and it’s up to us to decide whether it will move us forward and open up new possibilities or leave us standing still. Artificial intelligence can help us save time and energy for the things we really enjoy. Today, we’ll take a look at some useful tips on how to use AI tools effectively.
We were joined by data scientist Michal Bystricky, who specializes in language models and trend prediction, and system administrator Jakub Novak, who uses AI daily to optimize processes and avoid overload. Listen to the episode titled AI in Practice with our HR manager Ivana Hricova, or read the transcript below.
Michal, how would you explain to someone who doesn’t know much about AI what it actually is?
Michal: I see AI as a simplified human brain, because I see a lot of similarities between the two. The latest models can already selectively forget irrelevant information and, like us, remember only important facts. In late 2024, Google introduced the next generation of Transformer models, called Titans and sometimes dubbed “Transformers 2”, in the paper “Titans: Learning to Memorize at Test Time”. The innovation is that these models learn not only during training, but also at test time (inference time) – that is, at the moment we interact with them.
This means that if we ask the AI model to be stricter, for example, it will remember that and be stricter the next time it responds. This process is now built directly into the model, so there is no longer a need to use external memory management applications, making it more flexible and able to adapt to new situations.
Another fascinating thing is the way the models store information. The research shows that these models learn better when something surprises them, much like humans. When they come across unexpected information, they remember it faster. The researchers are thus trying to replicate the way our brains work – the more cues we have about an event, the more deeply we remember it.
AI also learns to gradually forget irrelevant details so that it doesn’t get lost in the mass of information. The paper also describes that memory is part of a neural network and the new models have three forms of memory – short-term, long-term and persistent, which allows them to select relevant data and better respond to new situations.
Jakub, what is your opinion on the matter?
Jakub: I would explain it more simply. To me, AI is still not at the level of the human brain, although it is moving in that direction. Developers use principles we already know from nature – when something works in the biological world, they try to apply it to machines. We do this in car manufacturing, in aircraft, in systems optimization, and now in artificial intelligence. So to me it is still an attempt to get closer to the brain – a running simulation of what the brain does, rather than the brain itself.
If I had to describe AI to someone who had never heard of it, I would say it’s like having a personal assistant who can help with almost anything, from writing to solving technical problems. It’s not perfect, but the more effectively you use it, the more time and energy it will save you.
So, you’re saying that AI is inspired by the human brain in that it can learn through use, retaining important information while forgetting irrelevant details. However, AI is still far from being as complex as the human brain, although developers are increasingly applying its principles to make it work more efficiently. Right?
Michal: I would follow up on what Jakub mentioned when he said that AI is just an online simulation. What if the human brain is also a form of simulated intelligence? The entire universe, including us, operates based on the laws of physics and chemical processes. Human intelligence is the result of neural connections and chemical signals, which we only partially understand so far. What if it too is just a biological simulation? Why do we think ours is the “real” one? Maybe we are just preoccupied with our own subjectivity and experience.
Jakub: This is getting a bit too philosophical for me, haha! It’s eleven o’clock, and I’ve got a long day ahead of me. I’d hate to fry my brain. 😊
OK, let’s be practical. What was the first AI tool that you used? Was it ChatGPT, or something less well-known? What was your first reaction?
Michal: The first breakthrough article that influenced me was “Attention Is All You Need” from Google. It introduced the first generation of Transformer models and the self-attention mechanism, which allows the model to analyze different parts of the input text with varying degrees of attention. This concept became the basis for other models, such as BERT and GPT, which began to specialize in different tasks – GPT for conversation, BERT for classification.
My first real exposure to AI came in 2020 with GPT-3, which was a major turning point for me. This model demonstrated far more sophisticated manifestations of intelligence, which came as a big surprise to me. Around the same time, another important article also emerged: “Scaling Laws for Neural Language Models”, which discusses scaling laws. In it, the authors showed that increasing the size of the model, the amount of training data, and the computation time has a directly predictable effect on improving its performance. This principle is the basis for the development of advanced language models today.
Jakub: I started experimenting with various online platforms that could rephrase text, expand it, or change its tone. But it was nothing groundbreaking – you always had to provide the input text, and the tools often fabricated information. So in practice, I used them only occasionally.
The biggest problem was that when the AI added something, I had to check the result carefully, because it often generated incorrect or completely fabricated information. Either I didn’t know how to prompt correctly, or the model itself was still imperfect. I couldn’t get a consistently high-quality output.
The turning point came in 2023, when OpenAI released GPT-4. That was the moment when artificial intelligence really took off and reached the form we know today. For ordinary users, this leap was huge. ChatGPT had already famously signed up one million users within its first week of launch, and GPT-4 showed just how massive the demand for this technology has been and how quickly the public has adopted it.
How does artificial intelligence actually work? How does it understand our needs, anticipate them and react to make our lives easier?
Michal: When we talk about language models, their basis is the prediction of the next word – based on the previous text, they estimate what will follow. It’s basically like a form of statistical regression. AI models are essentially large statistical models that learn from huge amounts of data from the internet.
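To make the next-word idea concrete, here is a toy sketch in Python. A simple bigram count model stands in for the statistical machinery; real language models use neural networks over subword tokens, and the corpus here is an invented example:

```python
from collections import Counter, defaultdict

# Toy bigram model: count which word follows which in a tiny corpus,
# then "predict" the most likely next word. Real LLMs apply the same
# idea with neural networks over subword tokens, not raw counts.
corpus = "the cat sat on the mat and the cat slept".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    # Return the most frequent continuation seen in "training".
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" twice, "mat" only once
```

Scaling this idea from counting word pairs to predicting tokens with billions of learned parameters is, in essence, the jump from classical statistics to modern language models.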
Interestingly, almost all the available data has already been used, so the focus of research today is on generating synthetic data to further improve the models. The new generation of models, like the aforementioned Transformers 2, brings a fundamental change – we will be able to extend those models during runtime (inference time). This means that models can learn, adapt and adjust their answers in real-time as we query them. This makes them even more accurate and efficient.
What is your view, Jakub?
Jakub: I look at it more from the user perspective. Every day, I work primarily with OpenAI models – GPT, DALL·E and, most recently, Sora. I’m not saying the other models are bad, but these are the ones I prefer.
For me, it’s not so important how they work, but what they can offer me. When I choose a tool, I evaluate it based on three main criteria:
- Will it help me in my personal development? Will I gain insights from it, and will it help me broaden my horizons?
- Will it save me time? Can it streamline my work or optimize tasks? Will it add value to my work or personal life in the long run?
- Can it generate income? Can I use it for something that will justify the cost of using the tool?
With GPT, I realized after a few days that it was a tool that would help me grow quickly and save time.
What has impressed you the most about ChatGPT? At what point did you decide that this was something you were going to use and that it was going to pay off?
Jakub: I first used it in high school. Back then, I didn’t know how to craft effective prompts to get the desired output. There was a big surge around it – a million users in less than a week – so I was worried it might be discontinued. The system was crashing frequently due to overload. I tried to learn how to use it before it became inaccessible. But that didn’t happen; it evolved over time, and today it’s far more stable.
Today, AI saves me hours of work and a lot of mental energy. It takes on tasks that would otherwise drain me – for example, brainstorming concepts. I recently needed to design a new model, and GPT helped me analyse the steps, determine the next course of action, and within nine hours, I had a working prototype. I didn’t have to manually analyse all the details or write lengthy documentation – GPT retained context and could clearly explain its output, such as a piece of code it designed.
In addition to creative support, AI also helps me with decision-making. When I have multiple options, I outline key points, and it helps me evaluate which is the most logical choice. Sometimes it’s useful to have an independent perspective, and AI can provide that.

Since the topic is AI in practice, could you give a few more examples of how it can be used effectively, and how often you use it?
Michal: First and foremost, I use AI at work – in programming and training models. Large language models help us find information and optimize processes.
In addition, I also use AI in investing, trading and news tracking. I work with time series models that can predict trends, and I build some of the models myself. They are much more sophisticated than conventional statistical indicators such as moving averages. I also use visual analysis models that can process images and analyse charts, identify levels or trends and then use them in various applications.
However, AI also helps me with my everyday learning and improves my understanding of the world. Nowadays, it’s relatively easy to understand complex research texts — simply enter them into ChatGPT and it will explain them to you in plain language. Anyone can try it out. It’s a huge help, particularly if you’re unfamiliar with the technical formulas or terminology. I recommend that everyone tries it with a topic they don’t understand to see how the AI can explain it.
And now for something a little more light-hearted – I also use AI for unconventional purposes. For instance, at the weekend I enjoy making creative breakfasts and arranging different images out of eggs and vegetables on my plate. What’s the ultimate level of this fun? I let the AI analyse my ‘artistic creation’ and tell me what it sees. It’s fascinating how it recognises different patterns — for instance, where I see a volcano with a lava-like egg yolk, the AI interprets the surrounding vegetables as flowers. I often think back on it and see something new in the image. It’s such an unexpected and creative experiment!
Jakub, what else would you add from practice?
Jakub: Since I’ve been able to use GPT to search the internet and find the resources I need, I’ve been using Google a lot less. Now, when I need something, I ask GPT first and only then head to the search engine. The AI quickly summarises the information and provides the source directly, saving me time.
What matters to me is where the information comes from, not just what the AI tells me, so I always ask for the source. I don’t even have to formulate elaborate prompts – I just give GPT a brief description of the problem, and it helps me find the answer faster than if I tried to formulate the query correctly in Google.
Could you suggest some practical uses of ChatGPT for complete beginners and advanced users alike?
Michal: ChatGPT is certainly an exciting tool. It allows you to process text, analyse images, think and make decisions. It makes my job a lot easier, especially in three areas.
For those who develop systems like me – models help us eliminate noise from user input. It used to be that the user had to articulate exactly what they wanted, but today it doesn’t have to be so precise. A model understands the approximate input and can extract the essentials from it. For example, when I’m programming, I don’t have to write full sentences, I just need a two- or three-word description, a few keywords, and the model already knows what I need.
The second area is logic and the ability to think. Sometimes, I pit multiple AI models against each other. One takes my input and generates an output — that is, it does what I want it to do. The other critiques what the first one produced. The third acts as a consensus moderator. I let them work this way for half an hour to get different perspectives on the problem. This often leads to solutions that I wouldn’t have come up with on my own.
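The generate–critique–moderate loop Michal describes can be sketched roughly as follows. This is a minimal illustration, not his actual setup: the `ask` callable, role names, and one-pass flow are assumptions, and a stub stands in where a real chat-model API call would go:

```python
# One cycle of the multi-model loop: a generator drafts an answer,
# a critic attacks it, and a moderator revises toward consensus.
def debate(ask, task, rounds=2):
    draft = ask("generator", f"Solve this task: {task}")
    for _ in range(rounds):
        critique = ask("critic", f"Find flaws in this answer: {draft}")
        draft = ask("moderator",
                    f"Revise the answer '{draft}' given this critique: {critique}")
    return draft

# Demo with a stub; in real use, `ask` would call a chat-model API
# with a role-specific system prompt.
def stub(role, prompt):
    return f"[{role}] reply"

print(debate(stub, "design a caching layer", rounds=1))
```

Keeping the model call behind a plain callable like `ask` also makes the loop easy to test and to swap between different providers.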
The third area is handling complex topics and communicating in real time. ChatGPT helps me to understand even brand new or highly complex topics without me needing to be an expert on them. I simply ask it a question and it explains the topic to me in simple terms. I also use Advanced Voice Mode a lot, as it enables the AI to respond naturally. I can interrupt it and rephrase the question if I want a different answer. For me, ChatGPT is literally a digital wingman that I discuss various topics with on my way home.
Jakub, what is your view on ChatGPT?
Jakub: I use it every day – it’s an essential tool for me. However, I often notice how people around me who don’t have much experience with AI perceive it. They often think of it as a soothsayer that will instantly provide them with the exact answer they need. It just doesn’t work that way.
As Michal mentioned, you can type just two or three words and the model will understand, but only if it already has some context. I also use it by providing a brief summary, which ChatGPT then understands. However, if I start a completely new conversation, the results can be way off because the model has no prior information.
Let’s use an example. Imagine I want to eat. I’ll come to your place and say, “Make me my favorite food.” How would you respond? You have no idea what my favorite food is. So you’d probably try to improvise, because you have to do something. You have to come back with an answer. You’d probably try to make something that you think I might want. But would it really be what I want? Probably not. And ChatGPT works the same way – if you don’t give it context, it has to make assumptions.
But if I came to my mother or grandmother with the same question and they had that context, they would know right away what I like. They wouldn’t have to guess. The same goes for artificial intelligence – if it has context, it works more accurately. You don’t have to be afraid to guide the AI a little bit more, to interact with it. You can give it more detail in the beginning. Ask it what input it needs from you to give you the output you want.
People perceive ChatGPT as a fortune teller who instantly answers everything they need. But AI works more accurately when it has context.
However, one thing is certain. Artificial intelligence will never be able to cook as well as your grandma.
Jakub: It certainly won’t. But, for example, two weeks ago I wanted to make dinner and I only had a few ingredients. I put them in ChatGPT and asked what I could cook with them. The result was surprisingly tasty. So even when it doesn’t cook, it can come up with interesting recipes.
How can you ask ChatGPT questions correctly? I recently asked Michal a simple question: ‘When were you born?’ He answered: ‘In May.’ I was surprised because I wanted to know the year, but I had worded the question incorrectly. Does it work similarly with ChatGPT? If we phrase the question incorrectly, will we receive an inaccurate answer?
Michal: Exactly. If the question is vague or too broad, the answer will be similarly vague or broad. The general rule is that the more specific the question, the more precise the answer.
Humans make a number of mistakes when working with AI. One such mistake is combining too many questions into one. This can cause the model to become confused and not provide the exact answer we want. Therefore, it is a good idea to ask questions clearly and unambiguously.
Another common mistake is that people blindly trust everything AI generates. While ChatGPT is very advanced, it can still produce inaccurate or distorted information. That’s why it’s important to check the facts, for example by searching on Google. Just type in keywords and verify your sources.
Many also assume that artificial intelligence knows everything, but this is not true. Its knowledge is limited and it does not have up-to-date access to all information. Another mistake is using overly complicated language – if a question is unnecessarily complex, the model may answer less accurately. It is best to phrase questions simply and directly.
So, how do you get started with artificial intelligence, and what costs are involved? Are there any interesting tools that you would recommend to listeners? What can they be used for? And more specifically, how do you choose the right AI tool, and how much should you expect to spend on AI software subscriptions each month?
Jakub: At the moment, I’m a regular user; I use AI through the web interface rather than the API, although I do have an API account. I only pay for ChatGPT Plus, but I can pay extra if I need more. On average, it costs tens of euros a month, so it’s not a significant expense. If you’re just starting out, I’d definitely recommend beginning with ChatGPT – the basic version is free and provides a good introduction to the technology.
Michal: I agree. ChatGPT by OpenAI is the best place to start – it’s free, although it has limits. If you run into those limits, you can pay $20 a month for advanced features.
However, I also use AI through the API – I’m writing a program that connects to the OpenAI models behind ChatGPT. As I mentioned earlier, I pit multiple models against each other to critically evaluate the outputs. If you want to access the latest OpenAI models via the API, you first have to spend $1,000 to reach Tier 5 (long-term user) status – a level that unlocks the latest versions of the models.
Interestingly, on January 20 the Chinese company DeepSeek released DeepSeek-R1, a new open model that achieves results comparable to o1, currently one of the best models. Open means you can install it directly on your computer and don’t have to send data to external servers.
If you access AI via the API and weaker models are sufficient for your needs, the prices are actually very low – you pay per token (the units of text the model processes). It costs me about $5 a month, which is less than a ChatGPT Plus subscription.
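As a rough illustration of per-token API billing, here is a back-of-envelope cost estimate. The prices below are placeholders, since actual rates differ per model and change over time:

```python
# Back-of-envelope API cost estimate. Per-token prices vary by model
# and over time; these figures are illustrative, not current rates.
PRICE_PER_1M_INPUT = 0.50   # USD per million input tokens (assumed)
PRICE_PER_1M_OUTPUT = 1.50  # USD per million output tokens (assumed)

def monthly_cost(input_tokens, output_tokens):
    return (input_tokens / 1e6) * PRICE_PER_1M_INPUT \
         + (output_tokens / 1e6) * PRICE_PER_1M_OUTPUT

# e.g. 2M input tokens and 1M output tokens in a month:
print(f"${monthly_cost(2_000_000, 1_000_000):.2f}")  # $2.50
```

Output tokens typically cost several times more than input tokens, which is why long generated answers dominate the bill.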
Jakub: Then again, if convenience is the priority, paying $20 a month for hassle-free access on the app is still the better option. If the price went up to $100, however, I’d be considering my own solution. At the current price though, I think it’s worth it for the convenience.
Michal, you mentioned advanced features. Are they worth using? And what exactly do you use them for?
Michal: One of them is Advanced Voice Mode, which I’ve already mentioned. This feature allows me to talk to the AI in real time. Originally, it was only available in the paid version, but it has since been rolled out to a wider audience, although usage limits still apply. If you want to brainstorm or have a natural conversation with the AI, it’s definitely worth checking out.
Another advanced benefit of the Plus version is access to the latest models. As soon as OpenAI releases a new model, subscribers get it first, while users of the free version wait longer. So if you want to always have the latest and greatest performance, it’s worth investing in the paid version.
If someone is considering using tools like ChatGPT, what should they know in order to get the most out of it? Are all versions paid? What should the user rather avoid? And how is the paid version better?
Jakub: ChatGPT is free to use as well. For a while it was even possible to try it without registration – I don’t know whether that’s still the case. Basically, you just open the site, register in a few moments, or log in with your Google account.
GPT-4o is currently available on a limited basis. OpenAI does not disclose how many messages or tokens users of the free version can send. However, if you exceed your limit, the system will either switch you to GPT-3.5 or prompt you to try again later, when you will be able to use the full version again. If you need to write more, you can pay extra. If not, you won’t be charged. I like OpenAI’s philosophy that they’ll give everyone access when they develop something new. First to paying users, of course, but a limited version is released a few months later.
$20 a month for the Plus version is not a large sum. In my opinion, it’s a worthwhile investment because the technology is constantly evolving. What many people don’t realise is that OpenAI has received substantial backing from Microsoft, which has invested around $13 billion in the company in recent years, giving OpenAI significant development resources. In turn, Microsoft integrates these models into Bing and Copilot. Bing Chat runs GPT-4 Turbo, which is not available directly through the ChatGPT site, so using it gives you access to a more powerful model than the free version of ChatGPT normally offers.
As for the versions, all advanced AI models are paid these days. I remember when OpenAI first released DALL-E for image generation. Initially, it was free, offering a limited number of credits per month – you could generate roughly 10 to 15 batches, with each credit producing four images. It could generate quite abstract images from text, things that basically didn’t exist: for example, a cat on a skateboard or a cat as an astronaut in space.
The o1 and o3 models that are emerging now have advanced logical thinking and planning capabilities. Another interesting development is the multimodal AI Sora, which combines text, images and video. I’m testing its video generation capabilities, but I’m not yet fully satisfied with the results.
Everyone has to choose a model according to their needs. If I were starting from scratch with AI, ChatGPT would definitely be my first choice. Just sign up and give it a try! If the free version meets your needs, that’s great. Otherwise, you can always pay extra to access the advanced features.
What are the actual differences between the latest version and the previous one? Are they that different? We discussed the various models: ChatGPT is free, DALL-E is geared towards marketing and graphic design, and o1 is intended for science and mathematics, offering more advanced analytical capabilities. Is ChatGPT sufficient for the average user?
Jakub: Yes, that’s right — you can either use the free version or pay for a subscription. Personally, I don’t use DALL-E or Sora that often. I know Sora has some limitations within the subscription. For example, even as a paying user, you can’t generate videos of the highest quality — I had a limit of four seconds per video, as well as limits on the number of characters in the description and the total number of videos I could generate. This makes sense, as generating videos is much more computationally intensive than typing with ChatGPT.
When it comes to the differences, it’s all about the speed of the responses and the available features. If you exceed the GPT-4o limit, you are automatically switched to GPT-4o mini. This model is slightly slower and may give lower-quality responses with limited features.
How exactly are those algorithms set up? Last time, I wondered when it would switch me from the higher version to the mini version. How does that work?
Jakub: This is exactly what Michal mentioned – it’s about tokens. The model counts how many tokens you consume, i.e. how much input and output text has been processed. When you reach your limit, it simply switches you to a weaker version. It’s not random; it’s not an algorithm that just ‘turns you off’. It simply informs you that you have used up the permitted number of tokens and that you can return in a few hours.
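A rough way to picture token counting: OpenAI’s own rule of thumb is that one token is about four characters of English text. Exact counts come from the model’s tokenizer (for example via the tiktoken library), but a heuristic like this sketch is enough for reasoning about limits:

```python
# Approximate token count via the ~4-characters-per-token rule of thumb
# for English text. A budgeting heuristic, not the real tokenizer.
def approx_tokens(text):
    return max(1, len(text) // 4)

print(approx_tokens("How many tokens does this message use?"))
```

Both your prompt and the model’s reply count against the limit, which is why long pasted documents exhaust a quota much faster than short questions.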
Michal: I would add something about the o1 model – it is not just for research and mathematical analysis. It has a feature called chain of thought, which means it thinks for some time before answering. The outputs are then more logical and of higher quality. It answers simple questions quickly, but if it is given a more complex task, it may think about it for a minute. So, in my opinion, it is also usable for ordinary users, not only for scientists and analysts.
Jakub: That’s a good point – I had completely forgotten about this! The interesting thing about o1 is that when you give it a task, it doesn’t just show how long it has been thinking. If you click on that indicator, you can see the whole chain of thought, i.e. the sequence of its reasoning steps.
You see step by step how the model proceeded – what it considered, what options it evaluated, what variables it took into account, and why it ultimately came to a particular conclusion. So it’s not just a random answer generated or an artificially delayed response time. You can really see that it went through some logical process before it gave you a final answer.
How can IT professionals effectively use AI tools in companies to comply with security policies? What should they be most careful about?
Jakub: The basic rule for me is simple: no sensitive data, whether corporate or private. I think of AI as a database in which I store information, but I never send anything confidential. I always consider what I’m sending to the AI, whether it’s corporate or private data.
For example, when I solve a technical problem for a colleague, I never enter any sensitive data, such as a specific machine number. Instead, I describe the problem in general terms: “I have this kind of problem; I found this solution, but I need some advice.” This way, I can receive a response without compromising security.
The same applies online. Don’t post pictures on Facebook that you wouldn’t want someone to use against you 10 years from now. The same goes for AI — you need to think twice about what you post. Everyone should set their own boundaries, but it’s definitely a good idea to use AI sensibly and consider security.
Do not feed AI tools any sensitive work or private data.
Michal: I would expand on this a little further: it’s not just personal or company data, but also information that could reveal how a company works, including its systems, architecture and strategic plans. Providing details to AI about how we plan something could backfire on us.
It is important to share as little specific information as possible. If we need to consult with AI about a problem, it is better to formulate it in general terms, without direct connection to our company or project.
If you don’t trust the models at all and don’t want your data to leave your computer, you can run them locally. However, running AI models quickly requires a powerful graphics card – ideally two cards with a combined 45–55 GB of VRAM.
Some of the open models you can install include Llama 3 and various vision models capable of analyzing images. Today, a setup with the necessary power can be bought for around €1,000. If you’re looking for a bargain, a second-hand NVIDIA RTX 3090 can be acquired for around €500, and it still has the best price/performance ratio.
The AI model can be run without a graphics card, but then everything is done in RAM and on the CPU, which is extremely slow. There is also the possibility of a combination – part of the model can run in RAM and part on the GPU, but it’s still not ideal.
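The VRAM figures above follow from a simple rule: memory for the weights is roughly the parameter count times the bytes per parameter. A minimal sketch (the bytes-per-parameter values are the usual ones for FP16 and 4-bit quantization; activations and the KV cache need extra headroom on top):

```python
# Rough VRAM needed just to hold a model's weights:
# parameters (in billions) x bytes per parameter ~= gigabytes.
def weights_gb(params_billion, bytes_per_param):
    return params_billion * bytes_per_param

print(f"70B model at FP16 (2 bytes/param): ~{weights_gb(70, 2.0):.0f} GB")
print(f"70B model at 4-bit (0.5 bytes):    ~{weights_gb(70, 0.5):.0f} GB")
print(f"8B model at 4-bit (0.5 bytes):     ~{weights_gb(8, 0.5):.0f} GB")
```

This is why quantized mid-size models fit on a single consumer GPU, while full-precision large models need multiple cards or offloading to system RAM.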
Jakub: The difference lies in the number of processors. GPUs have many more compute cores than regular CPUs. This is why graphics cards are essential for AI — they can handle parallel computations much faster.
Michal: Small models up to 10 GB of VRAM can be used, but their capabilities are quite limited. They work for simple tasks, but you can’t do much with them.
Have you ever used AI for something unusual? For example, in what ways has this tool pleasantly surprised you or even unpleasantly disappointed you?
Michal: I also use Chinese models, and it is fascinating how they work in image analysis. The model first extracts features from the image and then processes them. I recently came across an interesting situation – I was analyzing an English text. Since it was a Chinese model, it misinterpreted it as Chinese characters. This is an example of how training data affects AI performance. I often encounter surprises like this when developing these systems – they are not always pleasant.
Thank you, Michal and Jakub, for introducing us to the world of artificial intelligence today. If you could give the audience one piece of advice, what would it be?
Jakub: Don’t be afraid of AI! Many people are worried that AI will take their jobs. But let’s look at history: people used to plough with horses and ploughs; then tractors came along. The work didn’t disappear; just the tools changed. Rather than fearing AI, it’s better to learn how to use it to your advantage and add value. AI isn’t dangerous; it just needs to be understood and used wisely.
AI won’t take our jobs – it will only change the tools we use to do them.
Michal: AI agents are the future. This trend allows AI agents to communicate with each other and solve tasks together. We can model the dynamics of their collaboration, and I think this is the future – creating entire teams of machines to work on our behalf.
Dear listeners, today’s podcast was a real journey of discovery for me. I hope you found it interesting, and that we added a dash of humour while providing you with new insights. AI has the power to fascinate, surprise and amuse us. What is the message at the end? Don’t be afraid to try new technologies! The more you learn about them, the easier it will be to use them to your advantage. I look forward to seeing you on the next episode of Na vlne kodu.