Technology   //   November 8, 2023  ■  5 min read

AI Briefing: Here is how a startup named after a Black inventor is addressing AI’s racial bias

This briefing was first published on WorkLife’s sibling site Digiday.

As companies look to mitigate racial bias in AI, one startup is looking to fix the problem by building a new model trained on a more diverse set of cultural and historical data.

Latimer, named after the 19th-century Black inventor Lewis Latimer, is training a new large language model (LLM) to more accurately represent Black history and culture. According to Latimer founder and CEO John Pasmore, the idea came to him earlier this year when he noticed no other companies were approaching racial bias within LLMs as a technical issue to solve. 

“Not only was I seeing issues of bias, but there were inaccuracies in the history,” Pasmore said. “I have a 16-year-old, and the last thing you want is history to be baked incorrectly into what I consider the next round of [technology] just as big as search.”

Using OpenAI’s LLM as a foundation model, Latimer is building its AI platform on data from a range of sources, such as books and newspapers from Black writers. Through a “RAG” approach — short for retrieval-augmented generation — Latimer can tap the power of an LLM like GPT-4 while steering it to pull answers from a different, curated set of information.
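A retrieval-augmented generation flow like the one described can be sketched roughly as follows. This is a minimal illustration, not Latimer’s actual stack: the sample corpus, the word-overlap scorer, and the prompt wording are all assumptions; a production system would use vector embeddings and send the assembled prompt to a model such as GPT-4.

```python
# Toy sketch of retrieval-augmented generation (RAG): retrieve relevant
# passages from a curated corpus, then hand them to an LLM as grounding
# context so answers come from vetted sources rather than model priors.

CORPUS = [
    "Lewis Latimer patented an improved carbon filament for the incandescent lamp in 1882.",
    "Retrieval-augmented generation grounds model answers in an external document store.",
    "The Great Pyramid of Giza was built by paid Egyptian laborers, not slaves.",
]

def retrieve(question: str, corpus: list[str], k: int = 1) -> list[str]:
    """Rank passages by word overlap with the question (a toy stand-in
    for embedding similarity search)."""
    q_words = set(question.lower().split())
    scored = sorted(
        corpus,
        key=lambda p: len(q_words & set(p.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(question: str, passages: list[str]) -> str:
    """Assemble the grounded prompt that would be sent to the LLM."""
    context = "\n".join(f"- {p}" for p in passages)
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

question = "Who built the pyramids?"
passages = retrieve(question, CORPUS)
prompt = build_prompt(question, passages)
print(prompt)
```

The key design point is that the model itself is unchanged; only the context it is given shifts, which is what lets a RAG system answer from a curated corpus without retraining the underlying LLM.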

Latimer is partnering with Historically Black Colleges and Universities to give students access to the technology and also potentially to include them as part of the AI training process. It’s also working with experts like Temple University professor Molefi Kete Asante, the author known as the founder of the theory of Afrocentricity. 

“He’s amazing,” Pasmore said. “He’s written 100 books. He’s got a small army of students working on his digital library for this that we’ll use, and we’re actually reaching out [to] some of his former publishers of some of the textbooks that he’s created … He’s got so many speeches, and he himself is just an amazing repository of data for us.”

When explaining how this works, Pasmore gave the example of someone asking who built the pyramids. Instead of pulling answers from potentially problematic sources like social media posts or YouTube comments, Latimer draws on its curated training data to return a more factual answer. The company is also looking to address subtler forms of bias, such as the personal bias that might have influenced an author writing a PhD dissertation.

As tech giants and startups alike work to mitigate AI bias, the issue is also part of the White House’s agenda, included in President Joe Biden’s new AI executive order signed last week. When Digiday asked what he thought of the White House’s plan for AI, Pasmore described himself as “more of a kind of a free market person.” Rather than have the government intervene, he thinks there’s a “bigger risk of regulation stifling innovation than encouraging innovation at this point.”

“I think we’re an example of a correction and I’m sure there will be many others if there aren’t already,” Pasmore said. “I do think the market itself, especially if you’re talking to commercial customers, definitely does not want to have the added liability of irresponsible AI. So I think that anybody in the category is going to want to adhere to standards to deliver top-notch product.”

AI news:

  • Despite leading content authenticity efforts related to generative AI, Adobe was found to be selling dozens of fake AI stock images that appear to be from the Israel-Hamas war. The AI images — which portray scenes including a missile strike, a mother and child standing amidst the rubble and a bombed-out street — come just weeks after Adobe debuted a new “Content Credentials” label it plans to include on every piece of content generated by Adobe’s products. The images were first discovered by Crikey, an independent Australian news website.
  • Agentio, a Brooklyn-based AI startup, raised a $4.25 million seed round to help YouTubers sell ads and help advertisers find the right content creators. Along with automating the bidding process for 30- and 60-second YouTube video ads, the startup uses LLMs to analyze creators’ data and content, as well as advertiser data like campaign briefs, brand voice, guidelines and historical performance from previous creator partnerships.
  • In California, a federal judge dismissed much of an AI copyright lawsuit brought against Midjourney, Stability AI and DeviantArt.

Prompted products: AI debuts, updates, research and other announcements

  • Microsoft released its highly anticipated AI Copilot, which promises to give Microsoft 365 users an AI assistant for software like Word, Outlook and Excel. Copilot was first previewed in September at Microsoft’s annual Surface event in New York.
  • Noteworthy philanthropies are banding together to invest $200 million to mitigate the potential risks of AI. The group — which includes Mozilla Foundation and the Ford Foundation — wants to reduce those risks while still exploring the technology’s potential benefits.
  • Habu, the data clean room software provider, announced new generative AI tools that give users suggested insights, predict outcomes, implement new use cases and generate descriptions.
  • Meanwhile, Mozilla issued a joint statement on AI safety and called on others to sign a petition for “openness, transparency, and broad access.”
  • According to Forrester’s new report about predictions for 2024, 50% of U.S. adults and 43% of French adults surveyed said they thought generative AI could be a threat to society. However, 80% of enterprise respondents said they plan to add generative AI talent internally.