Meta will allow US government agencies and contractors working in national security roles to use its Llama AI models. The move relaxes Meta's acceptable use policy, which restricts what others can do with the large language models it develops, and brings Llama slightly closer to the generally accepted definition of open-source AI.
However, the benefits are expected to remain largely confined to the US, with limited effect on CIO decisions in other regions, according to Priya Bhalla, Practice Director at Everest Group. In Europe in particular, stringent regulations mean that credibility alone may not be enough to secure trust.
“Trust in AI solutions across Europe and other regulated regions will hinge more on how well companies address concerns around data sovereignty, privacy, and compliance with local regulations, and simply having the endorsement of the US government may not be enough to win trust,” Bhalla said.
Read more at: CIO