
We live in an age of pervasive misinformation, when even trained professionals can be duped into believing AI-generated content is real. The fast pace of technological advancement over the past decade has fostered confusion, suspicion, and cynicism that can leave us questioning what's real, both online and right in front of us.
We want you to know we understand that, and that the trust we have sought to earn from you for nearly 40 years is as important to us today as when we launched in 1986. That is why we are adopting an official artificial intelligence policy: to serve as a guide for our reporters and to inform our readers about how we intend to use and interact with the burgeoning technology.
As AI-based platforms have gained popularity, our newsroom has worked diligently to reckon with the ways the technology can change, and is changing, how reporters do their jobs, and with what it will take to maintain the trust of our readers.
We acknowledge that some AI platforms can help compile and sort information for data journalism projects, assist with research assignments, and provide transcription, and we encourage our staff to become familiar with the technology as a way to improve efficiency. However, we are committed to limiting AI tools to those areas; we will not use AI to create any content we publish.
Some major newsrooms and news services publish the output of generative AI, using it to produce corporate earnings articles or sports scoring reports, for example. We don't. It's important to note that AI-generated content sometimes "hallucinates," or makes up facts, so at the Journal, whenever our newsroom uses an AI platform for research or transcription, we are obligated to verify that the information it provides is accurate.
Our internal guidelines are also necessary to protect our content from misuse by AI companies. To safeguard the Journal's hard-earned content, and to ensure that human voices remain a priority in the newsroom, we will not give AI platforms any of the Journal's privileged or proprietary information. We will not give away the very thing that makes the Journal, and this profession, stand out.
Aside from research assistance, our newsroom will sometimes engage with AI tools for design inspiration or conceptual illustrations; when we publish such art, we will publicly disclose its use.
These policies will be updated in the future as technology advances and new best practices emerge.
AI is a promising technology, but one that, at least for now, must have guardrails. So we wanted to let you know, in this space, that our newsroom won't use such tools to create the news and information we provide to our readers. However good AI may be at streamlining some of the processes journalists perform, using it to create written content is simply not worth risking your trust.
Informing and educating the business community and its partners, connecting advertisers to customers, and bringing leaders together remain our No. 1 priority. How we do that matters.