OpenAI’s Generative Pre-trained Transformer 3, or GPT-3, has gotten a lot of buzz in the past few years. It’s basically an AI system that uses Natural Language Processing (NLP) to generate “human-like text.” You might remember how GPT-2 was considered so good that it was too dangerous to share, and it generated a fair bit of controversy to boot. This isn’t exclusively of interest to BI managers or analysts: most companies are involved in blogging, writing whitepapers, creating presentations, maybe even writing… poetry.
Some argue that GPT-3 can code, too, though any serious developer is likely to see it as an exercise in grins and giggles, with perhaps a few surprises. It may be unfair to write it off entirely, though, as GPT-3 has been integrated into Microsoft’s Power Apps, providing “citizen developers” with a low-code environment.
Companies like Bloomberg and Yahoo actively use NLP in content creation to turn out thousands of pages, and there are a variety of AI text generators available as SaaS products, to be sure. As we’re increasingly engaged in the development of AI for Gitential, we naturally wondered how useful GPT-3 or other NLP programs might be for us. Turns out: not very, but…
GPT-3 and other AI writers can be surprisingly good with fairly basic, non-technical content. The emphasis is on can be: it is hit or miss on the best of occasions. Spell-checking, grammar, and punctuation are quite good, but human editing is still required, and it can generate “facts” that need to be validated.
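To make that workflow concrete, here is a minimal sketch of how draft copy might be generated with OpenAI’s Python client against the GPT-3 Completion endpoint. The model choice, prompt, and generation parameters are illustrative assumptions, not a recommendation, and the output still needs the human editing and fact-checking described above.

```python
import openai

# Assumes an OpenAI API key is available; "davinci" was the base GPT-3 model.
openai.api_key = "YOUR_API_KEY"

prompt = (
    "Write a short, friendly introduction for a blog post about "
    "how software teams can use analytics to improve code review."
)

response = openai.Completion.create(
    engine="davinci",   # GPT-3 base model (assumption; choose per use case)
    prompt=prompt,
    max_tokens=150,     # cap the length of the generated draft
    temperature=0.7,    # some creativity, but not too erratic
)

draft = response.choices[0].text.strip()
print(draft)  # a human editor still reviews and fact-checks this draft
```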
Just a few months ago, ArticleForge announced its 3.0 release, though at the time it was taking about 10 minutes to process each query. Even then, only about 20-25% of the total content generated was usable with light editing, but… it did raise my eyebrows with the following example.