Do your worst, terrifying neural network

OpenAI’s massive text-generating language model, which was whispered to be too dangerous to release, has finally been published in full after the research lab concluded it has “seen no strong evidence of misuse so far.”

The model, known as GPT-2, was announced back in February 2019.

At the time, only a partial version of the model was made public, as the full system was deemed potentially too harmful to unleash: it was feared the technology could be abused to rapidly and automatically churn out large volumes of semi-convincing fake news articles, phishing and spam emails, bogus blog posts, and so on.
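For readers who want to poke at the now fully available model themselves, below is a minimal sketch of loading the 1.5-billion-parameter checkpoint and sampling a short continuation. It assumes the Hugging Face transformers library and its "gpt2-xl" model identifier, neither of which is mentioned in the article (OpenAI's own release lives in its gpt-2 GitHub repository), so treat it as an illustrative example rather than official tooling.

```python
# Illustrative sketch: load the fully released 1.5B-parameter GPT-2 and
# generate a short continuation. Uses the Hugging Face `transformers`
# library (an assumption; not OpenAI's own release code). The "gpt2-xl"
# identifier corresponds to the full-size checkpoint.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2-xl")
model = GPT2LMHeadModel.from_pretrained("gpt2-xl")

prompt = "OpenAI's text-generating model, GPT-2,"
inputs = tokenizer(prompt, return_tensors="pt")

# Nucleus (top-p) sampling tends to read more naturally than greedy decoding.
outputs = model.generate(
    **inputs,
    max_length=80,
    do_sample=True,
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```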

When The Register was privately given access to GPT-2 to test it, we found that…

…it did not actually contain any malicious code at all.
