GPT-4 & ChatGPT – all-rounder or parvenu?

Why the latest member of the GPT-4 family is not a revolution in content automation and when the Text Robot is better

ChatGPT article, part 1

Allow me to introduce myself, my name is ChatGPT.

GPT-4: A star is born?

Yes, the GPT-4 language model has been released! And once again, the entire internet is in a frenzy. The new multimodal offspring of OpenAI has further developed its creative writing skills and is now also better at understanding longer contexts and visual elements. For now, GPT-4 is only available via the paid ChatGPT Plus subscription and as an API for developers to build their own applications, with a waiting list for API access. Because of this gradual rollout, this article will nevertheless take a closer look at the capabilities of ChatGPT.
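For developers, access works via OpenAI's HTTP API. The sketch below only assembles the JSON body for the documented chat completions endpoint; the model name and prompt are placeholders, and an API key would still be needed to actually send the request.

```python
import json

def build_chat_request(model: str, user_prompt: str) -> dict:
    """Assemble the JSON body for a POST to
    https://api.openai.com/v1/chat/completions
    (sent with an "Authorization: Bearer <API key>" header)."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_prompt}],
    }

payload = build_chat_request("gpt-4", "Summarise this article in one sentence.")
body = json.dumps(payload)  # ready to send with any HTTP client
```

The request body carries the whole conversation in `messages`, which is what makes the same endpoint usable for both one-off completions and multi-turn chat.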

GPT-4 vs ChatGPT

According to OpenAI CEO Sam Altman, GPT-4 differs only very slightly in its conversational capability from GPT-3.5 (an improved version of GPT-3), which is the model behind ChatGPT and should already be familiar to most users. The architecture is therefore unlikely to have changed in any groundbreaking way, and the model’s neural network will probably still be humanoid, but far from human-like.

The ChatGPT language processing model sits on top of the GPT-3 models and was produced by the non-profit company OpenAI. Since its launch, it has caused a real furore online and offline. It creates texts, can summarise, translate or simplify them, and uses deep learning to do so. In this article, we will show how this AI is a true linguistic genius and why it is not the right tool for content automation in every use case. In the first part, we describe the history of the language model and how it functions. In the following two articles, we will look at its concrete capabilities and put it to the acid test using various use cases. Who will do better, ChatGPT or our Text Robot?


The non-profit organisation OpenAI was founded in 2015 by Elon Musk, Sam Altman and other investors in San Francisco. The ambitious OpenAI team sees itself as a blue-sky research laboratory exploring the possibilities of artificial intelligence and makes its results available to the public. The declared aim of OpenAI is to create an increasingly powerful, human-like intelligence that is capable of taking on more and more tasks. Conflicts of interest with other companies led Elon Musk to step down from OpenAI’s leadership in early 2018, remaining only a silent backer in the background.

In addition to GPT-1, GPT-2, GPT-3 and GPT-4, OpenAI has produced several AI products over the years. These include, among others:

OpenAI Gym

OpenAI Gym is a toolkit for the development and comparison of reinforcement learning algorithms

OpenAI Universe

OpenAI Universe is a software platform for measuring and training the general intelligence of an AI in the global range of games, websites and other applications

OpenAI Five

OpenAI Five is a system that learned the video game Dota 2 by playing against itself.

The GPT family

Generative pre-trained transformer (GPT) is a family of language models that are generally trained on a large corpus of text data to produce human-like text. They can all be fine-tuned for various natural language processing tasks such as text generation, language translation and text classification. The “pre-training” in the name refers to the initial training process on a large text corpus where the model learns to predict the next word in a passage. This creates a solid foundation for the model to perform well on downstream tasks with limited amounts of task-specific data.
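To make "predicting the next word" concrete, here is a deliberately tiny illustration that counts word pairs in a mini-corpus. A real GPT model learns this with a transformer over billions of documents; this sketch only shows the prediction objective itself.

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which ("pre-training" in miniature).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequent continuation seen in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" more often than "mat" or "fish"
```

Scaling this objective from bigram counts to a deep network over huge corpora is what produces the "solid foundation" for downstream tasks described above.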

The introduction of GPT has definitely taken the world of Artificial Intelligence into a new era, as this language transformer is a machine learning system whose algorithm is not trained for a specific task. Rather, natural language generation (NLG) is based on a multi-layered deep learning architecture that forms neural networks and imitates humans in this respect as well.


In 2018, OpenAI developed a general cross-task model (GPT-1) whose generative pre-training was performed using a book corpus. The model used unsupervised and semi-supervised learning. The original GPT study showed how a generative language model is able to acquire knowledge about the world and process long-range dependencies. In this basic model, the pre-training was done using unlabelled data, and the model was then optimised using examples for specific downstream tasks such as classification, sentiment analysis, textual entailment, and so on. Unsupervised learning served as a pre-training target for supervised, fine-tuned models, hence the name generative pre-training. More on learning methods follows below.

Semi-supervised learning (unsupervised pre-training followed by supervised fine-tuning) for NLP tasks involves the following three components:
  1. Unsupervised language modelling (“pre-training”): A standard language model was used for unsupervised learning.
  2. Supervised fine-tuning: This part aims to maximise the probability of a given label given characteristics or tokens.
  3. Task-specific input conversions: In order to change the architecture of the model as little as possible during fine-tuning, the inputs for the specific downstream tasks were converted into ordered sequences.
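Point 3 can be illustrated with a minimal sketch: a structured two-part task such as textual entailment is flattened into one ordered token sequence, so the pre-trained model's architecture stays untouched. The marker tokens below are invented for the example; GPT-1 actually uses learned start, delimiter and extract embeddings rather than literal strings.

```python
# Convert a textual-entailment example (premise, hypothesis) into one
# ordered token sequence for fine-tuning. The marker tokens are
# placeholders, not the real GPT-1 vocabulary.
START, DELIM, EXTRACT = "<s>", "$", "<e>"

def entailment_input(premise: str, hypothesis: str) -> list[str]:
    return [START, *premise.split(), DELIM, *hypothesis.split(), EXTRACT]

seq = entailment_input("A dog runs in the park", "An animal is outside")
```

The same trick works for other downstream tasks: classification simply wraps one text, similarity feeds both orderings, and so on.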

GPT-1 performed better than specially trained supervised state-of-the-art models in 9 out of 12 tasks compared. Another important achievement of this model was its ability to handle tasks at test time that it had not seen during training (zero-shot learning).

The architecture of GPT-1 facilitated transfer learning and could perform various NLP tasks with very little fine-tuning. This model demonstrated the power of generative pre-training and opened up avenues for subsequent models to further exploit this potential with larger data sets and more parameters.


Subsequently, the GPT-1 model was further developed by OpenAI until GPT-2 was released in 2019, although the full version was not immediately released due to concerns about possible misuse (applications for writing fake news, etc.).

Some experts were very sceptical and feared that GPT-2 could pose a significant threat. They warned that this technology could flood the web with plausible-sounding, context-appropriate text that would drown out every other form of text and be impossible to filter. GPT-2 is an unsupervised language model and therefore a general learner. The WebText corpus on which it was trained, scraped from outbound links on the Reddit platform, contains over 8 million documents with a total of 40 gigabytes of text.

The main change was to use a significantly larger data set and add more parameters to the model: 1.5 billion, ten times more than GPT-1 with its 117 million. GPT-2 performed better because it simply had more data to train on. This added further skills to those already present in its predecessor, and indeed the model outperformed GPT-1 on every downstream task (such as reading comprehension, summarisation, translation, question answering, etc.), while reducing the error rate from 99.8 to 8.6 and significantly improving accuracy.

GPT-2 has shown that training on larger data sets and with more parameters improves the language model’s ability to understand tasks by a significant amount. This laid the foundation for GPT-3 and ChatGPT.


In its ambition to develop very strong and powerful language models that require no fine-tuning and few instructions to understand and perform tasks, OpenAI developed the GPT-3 model with 175 billion parameters. It was released in 2020 and has 100 times more parameters than GPT-2. Due to the large number of parameters and the extensive dataset on which GPT-3 was trained, it performs extremely well on downstream NLP tasks.

Thanks to its large capacity, GPT-3 can write articles that are difficult to distinguish from those written by humans. It can also spontaneously perform tasks for which it has never been explicitly trained, such as adding up numbers, writing SQL queries and code, unscrambling words in a sentence, or writing React and JavaScript code from a natural-language task description. Microsoft was granted an exclusive licence for GPT-3 as early as September 2020.




The GPT-3 family available via the API includes models of increasing size:

GPT Ada

GPT Babbage

GPT Curie

GPT DaVinci

Basically, all these models differ in size. The larger they are and the more parameters they handle, the more tasks they can perform and the more skills they have, such as:


Answering questions based on existing knowledge
Grammar correction: corrects sentences into standard English
Summarisation: translates difficult texts into simpler concepts
Translation: translates English text into French, Spanish and Japanese
Text to command: translates text into programmatic commands
Chat: simulates SMS conversations with friends

and so on…

ChatGPT sits at the top of the GPT-3 family. It is a chatbot that was launched in November 2022. After you enter a prompt, it produces a text that continues that prompt. It was refined and reinforced with several different learning methods.

ChatGPT has been pre-trained on a huge and diverse text corpus comprising hundreds of billions of words.

What tokens are is explained further below.

GPT-3 is trained with a combination of supervised and unsupervised learning to predict the next token based on the previous tokens.

So with ChatGPT we are already in the 3rd generation of GPT models. This means that ChatGPT is not really a revolution in text automation. GPT-4, released in March 2023, brings a variety of additional capabilities.


GPT-4 is said to have been trained with substantially more parameters than the 175 billion of GPT-3 (OpenAI has not confirmed the widely circulated figure of 100 trillion), allowing it to solve more complex tasks with higher accuracy and improved alignment.

Alignment here means improved coordination between user prompts and text output.

This means that GPT-4 can naturally cover a wide range of additional applications. Most importantly, by training on such an enormous amount of data, the language model is able to mimic natural language even better. However, OpenAI says it wants to focus on optimising the GPT models rather than growing them indefinitely. What is clear is that GPT-4 works in a more human-like way than any previous GPT model.

Text output is not scalable in GPT-4 either. And despite the many improvements, the model continues to “hallucinate”. So although it has many capabilities, GPT-4 shares the limitations of its predecessors: it is not completely reliable, it invents facts and it commits errors of reasoning.

The most important difference to previous models is probably the ability to process images as input; its deep learning architecture was designed with this in mind. For example, you can submit a photo, a graphic or a screenshot as a prompt, and GPT-4 can recognise and describe in words what it shows.

As with GPT-3, however, there are limitations, because GPT-4 was also only fed with data up to the year 2021 – this means its knowledge of the world ends in September 2021. What happened afterwards does not belong to its universe and will therefore not be included in the outputs.



Learning methods

As we have learned before, the different GPT models use different learning methods. Here follows a short description:

Supervised machine learning

In supervised machine learning, all data in the dataset is labelled and the algorithms learn to predict the output based on the input data.

Labelled data are commonly data for which there is a prior understanding of how the world works. A human or automatic tagger must use its prior knowledge to add additional information to the data. Typical examples of labelled data are: A picture of a cat or a dog, with an associated label “cat” or “dog”.
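A minimal supervised learner for the cat/dog example might look like this. The features and their values are invented for illustration; any real classifier would use far richer inputs.

```python
# Labelled training data: (weight_kg, ear_length_cm) -> label.
# The numbers are invented purely for illustration.
training = [
    ((4.0, 6.0), "cat"),
    ((3.5, 7.0), "cat"),
    ((25.0, 12.0), "dog"),
    ((30.0, 10.0), "dog"),
]

def classify(features):
    """Predict the label of the nearest labelled example (1-nearest-neighbour)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(training, key=lambda item: dist(item[0], features))[1]

print(classify((5.0, 6.5)))   # close to the cat examples
print(classify((28.0, 11.0))) # close to the dog examples
```

The point is the shape of the data: every training example already carries the answer, and the algorithm learns the input-to-label mapping.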

Unsupervised machine learning

In unsupervised machine learning, all data in the dataset is unlabelled. In this case, the algorithms learn to recognise an inherent structure from the input data.

What is unlabelled data? It is data in its raw form. When we flip a switch or open our eyes knowing nothing about our surroundings or how the world works, we are collecting unlabelled data.

There is very little or no prior knowledge associated with unlabeled data.
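Finding inherent structure without labels can be sketched with a tiny one-dimensional k-means clustering. The data points are made up; the algorithm only ever sees the numbers, never a label.

```python
# Unlabelled 1-D data: the algorithm must find structure on its own.
data = [1.0, 1.2, 0.8, 9.9, 10.1, 10.3]

def two_means(points, iterations=10):
    """Tiny 1-D k-means with k=2: alternate assignment and centroid update."""
    a, b = min(points), max(points)  # initial centroids
    for _ in range(iterations):
        group_a = [p for p in points if abs(p - a) <= abs(p - b)]
        group_b = [p for p in points if abs(p - a) > abs(p - b)]
        a = sum(group_a) / len(group_a)
        b = sum(group_b) / len(group_b)
    return a, b

centres = two_means(data)  # two clusters emerge, near 1.0 and near 10.1
```

The algorithm discovers that the data falls into two groups, without ever being told what the groups mean.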

Semi-supervised learning

With this learning method, some of the dataset’s data is labeled, but most of it is typically unlabeled. This results in a mixture of supervised and unsupervised methods. For cost reasons, a small amount of labeled data and a large amount of unlabeled data is usually used for training.
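A toy version of this mixture is self-training: the few labelled points are used to label the rest, and the newly labelled points then become additional training data. All values below are invented for illustration.

```python
# A few labelled points and several unlabelled ones (1-D toy data).
labelled = [(1.0, "low"), (10.0, "high")]
unlabelled = [0.5, 1.5, 9.0, 11.0, 2.0]

def self_label(labelled, unlabelled):
    """Self-training in miniature: give each unlabelled point the label of
    its nearest already-labelled example, then keep it as training data."""
    grown = list(labelled)
    for x in unlabelled:
        nearest = min(grown, key=lambda item: abs(item[0] - x))
        grown.append((x, nearest[1]))
    return grown

training_set = self_label(labelled, unlabelled)
```

This is why a small amount of labelled data can go a long way: the cheap unlabelled data inherits labels and enlarges the training set.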

Reinforcement Machine Learning

Reinforcement learning is about learning the best strategy. The algorithm recognises through trial and error which actions are most rewarded. The system should select actions that result in the greatest possible reward within a specified time frame.

In this mode, therefore, the ideal behaviour in a particular context should be determined as far as possible in order to maximise the desired performance.
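Trial and error over actions can be sketched as follows. The actions and their rewards are fictitious and deterministic, so the "learning" reduces to trying everything and keeping the action with the best average reward.

```python
# Trial and error over three actions with fixed (toy) rewards.
rewards = {"left": 0.1, "middle": 0.7, "right": 0.4}

def learn_best_action(trials_per_action=3):
    """Try every action a few times, track average reward, keep the best."""
    totals = {action: 0.0 for action in rewards}
    for action in rewards:
        for _ in range(trials_per_action):
            totals[action] += rewards[action]  # observe a reward
    averages = {a: totals[a] / trials_per_action for a in totals}
    return max(averages, key=averages.get)

best = learn_best_action()
```

Real reinforcement learning adds noisy rewards, states and an exploration strategy, but the core loop of act, observe reward, prefer what was rewarded is the same.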

“Wrong” learning and toxic language

The training data occasionally contains toxic language, and GPT-3 occasionally generates toxic language as a result of mimicking its training data, including misinformation, spam, phishing, etc.

OpenAI has taken various measures to reduce the amount of unwanted (toxic) content generated by GPT-3.

Compared to its predecessors GPT-1 and 2, GPT-3 generates less toxic content thanks to these measures.

In 2021, OpenAI announced that sufficient safety measures were in place to allow unrestricted access to its API.


What learning method does ChatGPT use?

ChatGPT uses both supervised learning and reinforcement learning. Both approaches use human trainers to improve the performance of the model using “reward models” and a proximal policy optimisation (PPO) algorithm. In addition, OpenAI continues to collect data from ChatGPT users that can be used for further training and fine-tuning.
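The core idea of a reward model can be sketched like this. In real RLHF, the reward model is a neural network trained on human preference rankings and the policy is updated with PPO; the heuristic scoring function below is purely a stand-in to show the selection step.

```python
# A stand-in "reward model": in real RLHF this is a neural network trained
# on human preference rankings; here a crude heuristic scores answers.
def reward(answer: str) -> float:
    score = 0.0
    score += 1.0 if answer.endswith(".") else 0.0  # well-formed sentence
    score -= 0.5 * answer.lower().count("maybe")   # penalise vague hedging
    score += min(len(answer.split()), 20) / 20     # reward some substance
    return score

candidates = [
    "Maybe it works, maybe not",
    "The model predicts the next token from the previous tokens.",
]

# The policy update (PPO) is omitted; we just pick the preferred sample.
preferred = max(candidates, key=reward)
```

The training signal is exactly this kind of preference: responses that score higher are reinforced, pushing the model toward answers humans rank better.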

Users can, for example, upvote or downvote the responses they receive from ChatGPT; they can also fill in a text field with additional feedback.

ChatGPT is trained to predict what the next token will be based on previous tokens.

What are tokens?

Tokens are the result of a process called tokenisation: the segmentation, and where applicable the classification, of sections of a string of input characters.

In the GPT models, Natural Language Processing (NLP) language comprehension was improved through a process of generative pre-training on a diverse corpus of labelled and unlabelled text consisting of tokens.
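As a rough illustration, here is a toy word-level tokeniser that maps text to integer ids. Real GPT models use byte-pair encoding (BPE) over subword units, so this is only schematic.

```python
import re

def tokenise(text: str) -> list[str]:
    """Split a string into lowercase word and punctuation tokens."""
    return re.findall(r"\w+|[^\w\s]", text.lower())

def build_vocab(tokens: list[str]) -> dict[str, int]:
    """Assign each distinct token an integer id, in order of appearance."""
    vocab: dict[str, int] = {}
    for token in tokens:
        vocab.setdefault(token, len(vocab))
    return vocab

tokens = tokenise("Allow me to introduce myself, my name is ChatGPT.")
ids = [build_vocab(tokens)[t] for t in tokens]
```

It is these integer ids, not raw characters, that the model actually reads and predicts.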


OpenAI’s GPT family represents a remarkable advance in Artificial Intelligence and NLG technologies. ChatGPT in particular offers many possibilities for text creation and human-like communication. In this article, we have explored how the language model came into being and how it knows what it knows.

In the following articles, we will delve a little deeper into the matter and find out what specific capabilities it has and where it reaches its limits. Finally, we will use various use cases to test whether and when ChatGPT or our Text Robot performs better.


ChatGPT in the age of GPT-4 – the real capabilities of a ChatBot

Having looked in depth at the genesis and learning methods of OpenAI’s GPT language models in the last article, in this one we will look at the real capabilities of ChatGPT. We would also point you to another blog post that we will dedicate exclusively to GPT-4. Exactly how helpful is ChatGPT in completing various tasks such as text writing, dialogue, summarising, etc.?

What is ChatGPT capable of?

The ChatGPT language model is a sub-model of the GPT-3 language model, which means that ChatGPT basically works in the same way as GPT-3. It is also based on a transformer architecture and is trained in an unsupervised way. Much like its progenitor, ChatGPT works by predicting the next token in an arbitrary sequence of tokens.

The idea behind all GPT language models and the core of OpenAI’s work in language generation research is exactly this: Given a large corpus of data, the model learns by reading that data (text), and when it sees something similar, it anticipates what to output.

The large language model uses this knowledge to generate human-like text and responses, which, admittedly, the chatbot does very well. What clearly sets it apart from other chatbots is its amazing ability to hold a consistent and precise conversation. When GPT-3 was first published in 2020, it was indeed the largest dense neural network in existence. With ChatGPT you can have long, deep conversations.

If you strip away the hype, ChatGPT is a very sophisticated ChatBot in its main function. It performs much better than all other existing ChatBots. In contrast to previous language models, the GPTs from version GPT-2 onwards are multitasking-capable for the first time. In this sense, ChatGPT actually has the opportunity to revolutionise the entire ChatBot market.

Some examples of use have already been listed in Part 1 of this guide.

It should be noted here that ChatGPT shines particularly well with the following rational capabilities:

    [Image: rational capabilities of ChatGPT]

    It is also an excellent storyteller, which makes it particularly suitable for the following tasks, among others:

    [Image: storytelling with ChatGPT]

    Increased productivity

    Ultimately, these skills facilitate communication between humans and computers. Instead of learning a complex system of codes and software solutions, you can ask ChatGPT for something in your own words – and get it instantly. All in all, ChatGPT offers some amazing possibilities in terms of natural language processing (NLP) and can significantly increase your productivity. Let’s go into this in a little more detail:

    Automation of simple tasks and workflows

    One of the basic uses of ChatGPT is to perform routine tasks and recurring workflows in your daily work. It can help you organise your schedule by scheduling meetings, answering messages and reminding you of deadlines. This way you can keep track of your tasks without having to worry about your appointments or your full email inbox.

    This can increase your productivity as it can save you time that you would otherwise have to spend doing these tasks. You can also train ChatGPT to answer different messages, such as emails, social media comments or reviews.

    Creation of viral content for social media

    ChatGPT can be used to create social media posts and entire discussion groups that are likely to be well received by your readers. For example, because the ChatBot has been trained with huge amounts of data from Twitter, it knows the best phrases for tweets and even frequent responses to different types of posts. The programme can not only create new content for social media, but also rewrite existing content to achieve greater audience coverage.

    Content ideas

    ChatGPT is ideal for generating new content ideas. Content creators, for example, always need new ideas for their customers in order to get more traffic and engagement. In such cases, all they have to do is enter the customer’s requirements into ChatGPT and the ChatBot will brainstorm for them.

    If you already have a finished blog post, ChatGPT will help you find a catchy title. Or are you perhaps even a writer and already have a few ideas for a new novel, but no concrete story yet? ChatGPT can help you write your next bestseller.


    Prompt: “Develop a plot for a romance novel that features a cook who always has the hiccups. Everything has to be very romantic and always take place during the night.”

    ChatGPT answers this prompt as follows:

    The story revolves around Emma, a talented cook in an elegant restaurant in the city. Emma has a little secret – she has had the hiccups regularly since she was a child, and it usually happens at the most inopportune moments.
    One night, while Emma is working in the restaurant, a handsome guest named Alex enters. Emma can’t take her eyes off him, but when she tries to serve him his order, she gets a bad hiccup that just won’t stop. Alex, noticing that Emma is struggling, offers to help her and holds her gently until her hiccups finally pass.
    Emma is relieved and feels grateful for Alex’s help. In the following weeks, Alex visits the restaurant regularly, always at the same time as Emma is working. He sits at her table and they talk about food, travel and life in general. Emma begins to fall in love with Alex, but she doesn’t know if he reciprocates her feelings.
    One night, Emma decides to lead Alex to her secret favourite place – a hidden roof garden with a breathtaking view of the city. There they share a bottle of wine and enjoy the view. Suddenly Emma gets bad hiccups again, and this time it’s so bad that she can’t breathe. Alex holds her again and this time he kisses her gently as he helps her calm down.
    After this special moment, Emma knows that Alex reciprocates her feelings. They spend the rest of the night together and talk about everything they missed. Emma realises that she has been searching all along for someone who would accept her as she is, and that Alex is that person.
    In the following weeks, Emma and Alex spend every night together, always at a different romantic location in the city. Emma becomes more certain of her love for Alex and happier. One day Alex asks Emma if she would like to start a new life with him, and she says yes. Together they leave the city and start a new life as travel bloggers and food bloggers, always looking for the next romantic adventure and the best meal they can find.

    A little cheesy, perhaps, but on the whole not bad at all, right?



    Goal-oriented articles and blog posts

    The more precise your instructions are, the more likely it is that the ChatBot will create a text that captivates the reader and also contains, for example, a call-to-action. Thanks to the billions of texts that have trained it, ChatGPT knows how such articles, blog posts and reports need to look in order to perform well.


    Programming assistance

    ChatGPT can also be used as a programming assistant, and here it is truly revolutionary. With its help, programmers can find the most elegant way to program an application, write the code and detect errors. Professionals have confirmed that ChatGPT is invaluable when it comes to debugging code. In addition, the programme can also create more complex code – such as a complete WordPress plugin.

    Simplification of complex concepts

    Do you sometimes feel overwhelmed by complex concepts for which there seem to be no simple answers? When every explanation raises more questions than it answers, ChatGPT is there for you. It explains complicated concepts, processes and phenomena in a way that is easy to understand – and remains charming at the same time.

    What can ChatGPT not do?

    If you’re perhaps a bit technophobic or just not that familiar with artificial intelligence – you might be thinking that this all sounds too good to be true. To a certain extent you are right. While ChatGPT can undoubtedly do some amazing things, it also has some limitations. When using this ChatBot, it is important to be aware of these limits.

    Outdated training data set

    What does that mean? Although the pre-training dataset has been updated for GPT-4, the world for ChatGPT ends in 2021. Contrary to a common assumption, the system does not have live access to the internet the way Google or other search engines do. This means that ChatGPT is limited to data from 2021 or earlier. It has no up-to-date information, and everything it outputs is based on the data it was fed up to 2021. Questions that require more recent information will therefore not receive applicable answers.

    Research phase and false information

    [Screenshot: ChatGPT research preview notice]

    When you use ChatGPT, the first information you receive from OpenAI is that the chatbot is still in a research phase. You may be given answers that sound plausible but are wrong or nonsensical. Therefore, always be aware that you are interacting with a chatbot in the research phase and consider the responses with a certain caution and scepticism.

    Hubris

    A big limitation of ChatGPT is its hubris: it simply cannot admit to not knowing something. It is and remains a probabilistic system, and whenever a gap occurs, ChatGPT tries to guess which token could come next. In these cases, it will insert the most probable token, even if it is not correct. In doing so, it passes on false information without being aware of it. This is what we call “hallucinating” or “fabulating”. It is therefore advisable in any case to proofread all response texts again by hand.


    Who holds the world record for the longest unaccompanied one-way ski expedition by a woman?

    [Screenshot: ChatGPT’s answer about the record]

    The answer contains many errors if you compare the output with the correct report in the newspaper “Rheinische Post”. This means that you can never know for sure whether a generated text is reliable or not. This can be particularly problematic with texts for companies.
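Why the model answers anyway can be sketched with a toy next-token distribution. The candidate names and scores below are entirely made up; the point is that the model emits the most probable token even when its probability is barely above the alternatives.

```python
import math

def softmax(logits):
    """Turn raw scores into a probability distribution."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Made-up next-token scores for a prompt like "The record holder is ...":
candidates = ["Anja", "unknown", "Maria"]
logits = [1.1, 1.0, 0.9]  # the model is almost indifferent

probs = softmax(logits)
best = candidates[probs.index(max(probs))]
confidence = max(probs)

# The model outputs "Anja" even though it is barely more likely than the
# alternatives -- it has no built-in way to say "I don't know".
```

A confidently worded answer can therefore rest on a probability of barely more than a third, which is exactly why hand proofreading remains necessary.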

    Problematic service

    The service was initially made available free of charge to the general public, with the aim of monetising it at a later stage. As a result, there are often bottlenecks due to congestion and overuse, and the service still breaks down occasionally. In addition, ChatGPT runs best in English, with different results in other languages.

    [Screenshot: “ChatGPT is at capacity right now”]

    However, OpenAI has now introduced a subscription service called ChatGPT Plus for 20 US dollars / month. It gives users uninterrupted access to ChatGPT, faster response times and earlier access to new features.

    Systematic errors and information processing errors

    It is clearly evident that ChatGPT often provides illogical answers to mathematical operations and simple logical puzzles. Critics have also pointed out that the programme adopts the biases of the texts it has been trained on when it comes to the issues of gender, race, religion, etc. This is of course due to distorted training data, which ChatGPT simply takes over by mimicking.

    Although different filtering functions have been introduced, toxic language is still a problematic aspect of the system.

    Ethical concerns

    ChatGPT is certainly a remarkable technology that captivates us all because there are endless possibilities to use it in a meaningful way. However, it also harbours some dangers and can certainly cause damage.

    For example, there is the question of sustainability, both in terms of energy consumption and financial burden. It is currently estimated that ChatGPT costs about USD 3 million per day to operate. The energy consumption and environmental impact are also considerable, as the pre-training of GPT-3 alone left roughly the same carbon footprint as “driving a car to the moon and back”.

    Time magazine also revealed an abuse scandal:

    OpenAI used outsourced Kenyan workers to build a security system against toxic pre-training content and to label it. Using these labels, the model was then trained to recognise and avoid such content in the future. This content included topics such as sexual abuse, violence, racism, sexism, the making of weapons, etc.

    The staff deployed received very low pay for their work and described the toxic content to which they were exposed as veritable “torture”.


    Although ChatGPT opens up many new possibilities, it is important to recognise and work around the limitations of this probabilistic technology. ChatGPT, for example, cannot do anything that needs to be up to date. In this respect, the Text Robot represents a more reliable system: it is deterministically rule-based and can also be fed with brand-new, specific data. The Text Robot automatically generates texts from that data, and unlike ChatGPT’s output, these will never be invented or false. Texts created by ChatGPT, by contrast, should always (!) be proofread by hand.

    But we will go into the individual use cases in the third and final part of this holistic article on ChatGPT. There we will also take a closer look at the application possibilities of the newly released GPT-4.

    GPT-4 & ChatGPT vs. Text Robot: For which areas of application is which language model better suited?

    After dealing with the genesis of the various GPT models and their real capabilities in the first two parts of this guide, we want to devote this part to the various areas of application. You may well be asking yourself which of the two approaches is the right one for your use case: the GPT AI or our Text Robot? And how do the two models differ from each other? This is precisely the question we will address in more detail in this article. Since, as already mentioned, GPT-4 is for the time being only available via the paid ChatGPT Plus subscription and as an API for developers to build their own applications (there is a waiting list for API access), we will intentionally explain the application possibilities of ChatGPT in this article and compare them with those of the Text Robot.

    GPT-4 or ChatGPT

    OpenAI’s GPT-4 and ChatGPT are both customisable NLP models that use machine learning to provide natural-sounding answers to questions. GPT-4 allows users to generate a variety of text types, while ChatGPT is specifically designed to have conversations.

    Like the chatbot ChatGPT, GPT-4 also creates texts, can summarise, translate or simplify them and uses the process of deep learning to do so. We will show to what extent this language genius is not the right tool for content automation for all use cases.


    GPT-4, ChatGPT, Text Robot – What are the differences?

    GPT-4, ChatGPT and our Text Robot (data-to-text) are all Natural Language Generation (NLG) technologies that make it possible to generate text in natural language. Although all of these tools produce natural-sounding text, they differ significantly from one another in a number of points.

    GPT-4 differs very little in its conversational ability from GPT-3.5, which is the model behind ChatGPT. It, too, is a probabilistic language model that is pre-trained on a predetermined data set.

    The Text Robot, on the other hand, works according to a deterministic data-to-text concept and refers to the automated creation of NLG texts using a specific, structured data set.

    What is structured data?

    These are attributes that can be neatly presented in tabular form and contain valuable information that can flow into the texts.

    Examples of this are product data from a PIM system, statistics from a football game or data on a hotel location.
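A deterministic data-to-text step can be sketched as follows. The product attributes and wording rules are invented for illustration and are not the Text Robot's actual implementation; the point is that identical data always yields identical, factually grounded text.

```python
# Structured product data, e.g. as exported from a PIM system.
product = {"name": "Trail Runner X", "category": "running shoe",
           "weight_g": 240, "waterproof": True}

def describe(p: dict) -> str:
    """Deterministic rendering: the same data always yields the same text."""
    parts = [f"The {p['name']} is a {p['category']} weighing {p['weight_g']} g."]
    if p.get("waterproof"):
        parts.append("It is fully waterproof.")
    return " ".join(parts)

text = describe(product)
```

Because every sentence is derived from a data field by a fixed rule, nothing in the output can be invented, and updating the data automatically updates the text.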

    If you want to delve deeper into the structured data argument, then you should read our Holistic Content Data Management article.

    With the Text Robot, users always retain total control over the output and can change, adapt or update the text at any time. So you can always make sure that it has the tone, style and diction you want. In this way, the consistency, expressiveness and quality of your texts are uniformly maintained. With full control, texts can sound exactly the way you want them to. Moreover, unlike texts generated by GPT models, they are always correct in form and content. They can also be personalised and scaled.

    Unique selling point: Languages

    The Text Robot automatically generates texts that can, on request, be played out in up to 110 languages. The texts are not simply translated, but localised to mother-tongue level. GPT models, on the other hand, create individual texts in individual languages, and these must always be checked for correctness by a human afterwards.


    Another fundamental difference between the two tools is the scalability of texts. Tools like our text robot, which are based on structured data, can create hundreds of texts on products with variable details in just a few seconds. And since every text is produced uniquely, you always have enough content for all your output channels.

    ChatGPT and GPT-4, by contrast, generate one text at a time. This is quick, but users have little influence on the content, and multilingualism is so far supported for only a limited number of languages.

    For which use cases are GPT models suitable, and for which is the Text Robot the better choice?

    GPT models are ideal for brainstorming or as a basis for a text, e.g. a blog post. Data-to-text software, on the other hand, is best suited for companies with high demand for text content that contains a lot of variable details.

    Text Robot

    Data-to-text is suitable for use in many industries, including e-commerce, banking, finance, pharmaceuticals, media and publishing.


    The key advantage is that the Text Robot can generate high-quality descriptions for many products with similar details in different languages based on the data provided by the customers. This not only reduces costs and time, but also increases SEO relevance and conversion rates on the product pages at the same time. It is almost impossible to manually write large amounts of text, such as thousands of product descriptions for an online shop. All the more so if they have to be updated regularly, for example due to seasonal influences.

    This is where the Text Robot is very useful. Once the project is set up, only the data needs to be updated; the update then automatically adapts existing texts or creates new, unique ones. This is particularly interesting for pharmaceutical and financial companies, which can automatically generate a wide variety of reports, analyses and similar texts from their data and statistics.

    Pleasant side effect: More time for creative writing

    Automated text creation frees up time for creative or conceptual work.

    ChatGPT and GPT-4

    GPT tools, on the other hand, help with brainstorming and are a formidable source of inspiration. Whenever new, brilliant ideas are needed, these language models shine: they work quickly, process inputs in the blink of an eye and, depending on the topic, deliver amazingly well-written, imaginative texts. Despite their potential for errors, their use can therefore be well worthwhile: for example, whenever a long text is to be generated from a short prompt, when no writer is available, or when writing texts by hand is simply not worth the cost or time. Answering recurring customer enquiries is one such case.

    Who should use ChatGPT or GPT-4?

    These language models are useful whenever individual, inventive texts need to be created quickly and cheaply, as in storytelling. Bloggers and shops or institutions with a low text requirement are therefore the primary audience, but marketing managers and social media managers can also use these tools profitably. Blog posts in particular usually cover frequently changing topics, and their small number simply does not justify the effort required for the initial setup of the Text Robot. GPT models also suit anyone with sufficient capacity for post-editing the texts, so that quality and meaningfulness are always guaranteed.

    For whom is the Text Robot suitable?

    It is suitable for anyone who needs large amounts of content of the same text type, and also for anyone who wants to internationalise their content quickly while keeping branding consistent. Because data-to-text is grounded in the customer's own data, it delivers precise, data-backed statements and rules out false claims. Anyone who offers thousands of items in their online shop simply needs scalable texts that can be regenerated automatically and flawlessly again and again. One mouse click is enough, and up to 500,000 fresh texts are ready for your various channels.

    This is not magic. This is imaginative, rule- and mapping-based programming work!

    The data-to-text option is suitable for generating large amounts of similar content with variable details from structured data sets. GPT models, on the other hand, are suitable when content needs to be generated without a data context. To determine which system suits which industry and application, the needs of the respective project must therefore be considered.
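As a small illustration of what "rule- and mapping-based" means, the following sketch maps data values to fixed phrasings and applies the same rules across many records. The mapping, data and wording are invented for this example; they are not the Text Robot's actual rules:

```python
# Hypothetical mapping rules: each data value is mapped to a
# controlled phrasing, so wording stays consistent and data-backed.
RATING_PHRASES = {1: "basic", 2: "comfortable", 3: "premium"}

hotels = [
    {"name": "Hotel Alba", "stars": 3, "distance_km": 0.4},
    {"name": "Pension Sol", "stars": 1, "distance_km": 2.1},
]

def hotel_text(h):
    """Build one sentence per record from mapped values."""
    quality = RATING_PHRASES[min(h["stars"], 3)]
    return (f"{h['name']} offers {quality} accommodation, "
            f"{h['distance_km']} km from the city centre.")

texts = [hotel_text(h) for h in hotels]
for t in texts:
    print(t)
```

Scaling to thousands of records is just a longer input list; every sentence remains traceable to the data that produced it.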

    What are the requirements of your text project?

    • Do you have a lot of content that you want to update at the click of a button?
    • Is control over content quality important to you?
    • Do you need to personalise texts, e.g. in how people are addressed?
    • Do you need your different texts in different languages at the same time?
    • Must your texts never contain false statements or errors of any kind?
    • Do you need to avoid duplicate content?
    • Are ethically questionable statements an absolute no-go for you?
    • Do you want to avoid proofreading your texts?
    • Do you want new statements with explanatory added value in your texts?
    • Do you want to keep tonality and style consistent?
    • Do you want SEO-optimised texts that increase traffic?

    This mostly applies to the following sectors and areas:

    • Large online shops with product and category descriptions
    • Newsletter/News Stream
    • Travel offers & hotel descriptions
    • Reporting for healthcare, pharmaceuticals, finance, accounting, stock exchanges
    • SEO-optimised landing pages
    • Publishers for weather reports, horoscopes, sports coverage
    • Politics, election results
    • Real estate industry, property descriptions
    • Emotional / seasonal texts for almost every industry

    and much more

    For which projects are ChatGPT & GPT-4 useful?

    • Are you in charge of storytelling?
    • Do you want to write blog posts?
    • Do you need content for your social media channels?
    • Do you want to chat, manage dialogues or answer questions?
    • Do you need website templates?
    • Do you need to fill in tables?
    • Do you need code?
    • Do you only have continuous text as input, but no data?
    • Do you lack structured data?
    • Do you want to use pictures as prompts?
    • Do you want to create individual texts quickly and inexpensively?
    • Are you looking for inspiration when writing?
    • Do you need your text in a single language only?
    • Do you want to avoid an initial setup?
    • Do you need only a little text based on little input?
    • Is the most probable output enough for you, even if it is not necessarily 100% correct?
    • Do you want to generate content from images, music and videos?

    What should be considered?

    Whenever you need to highlight and emphasise certain features in thousands of texts, it is better to consider the Text Robot. You should also do this if you want your text output to scale quickly (for example, in the form of high-quality product descriptions for hundreds or thousands of products – and in different languages, too).

    ChatGPT or GPT-4, by contrast, is the better choice when it is not feasible to have every text written by a human and repetition in longer texts is acceptable. Bear in mind, however, that GPT output still requires editing and fact-checking. Also note that the Text Robot always generates SEO-optimised texts, whereas the GPT models tend to neglect this aspect.

    Multimodality for the extension of creative writing

    While GPT-4 remains a pure language model that continues to produce only text, it now also understands images, videos and even music. What does that mean? It means this AI can create text from multimodal, non-textual input. Even speech-to-text is possible, i.e. the processing of spoken language. Telephone calls can thus be recorded without their content having to be summarised and typed up manually afterwards.

    Text-to-video with the Text Robot

    And what is happening in this respect with the Text Robot? Quite a lot: customers who produce thousands of product descriptions from their inventory data via an API call can generate thousands of product videos through the same API. These videos combine images with sound, a voice-over of the product description and short, focused bullet points displayed within the image.

    The scaling effect is immense, e.g. when introducing new campaigns.

    Customers can present their products not only on their own platform, but also on all other platforms where their products are sold. This is a multimedia solution that can have a positive effect on Google ranking, as all content in the video is searchable and thus SEO-relevant.


    GPT-4 and ChatGPT definitely write good, humanoid texts – no question. However, you should be clear about what kind of text you want and how much of your own work you are willing to put into the result. The GPT models draw on data from the internet and produce the most probable output from it. Since they do not know how the world works, they can neither understand nor evaluate the result. Are you sure you can be happy with the supposedly “most likely” output? Or is this limit a little too restrictive after all?