Why the world is looking to ditch American AI models

A few weeks ago, at the digital rights conference RightsCon in Taiwan, I watched in real time as civil society organizations from around the world, including the US, grappled with the loss of one of the largest funders of global digital rights work: the United States government.

As I wrote in my dispatch, the shockingly swift gutting of the US government (and its push toward what some prominent political scientists call "competitive authoritarianism") also affects the operations and policies of American tech companies, many of which have users far beyond US borders. People at RightsCon said they were already seeing changes in these companies' willingness to engage with and invest in communities with smaller user bases, especially non-English-speaking ones.

As a result, some policymakers and business leaders, in Europe in particular, are reconsidering their reliance on American technology and asking whether they can quickly spin up homegrown alternatives. This is especially true for AI.

One of the clearest examples is in social media. Yasmin Curzi, a Brazilian law professor who researches domestic tech policy, put it to me this way: "Since Trump's second administration, we cannot count on [American social media platforms] anymore."

Social media content moderation systems, which already use automation and are experimenting with deploying large language models to flag problematic posts, are failing to detect gender-based violence in places as different as India, South Africa, and Brazil. If platforms begin to rely even more on LLMs for content moderation, this problem is likely to get worse, says Marlena Wisniak, a human rights lawyer who focuses on AI governance at the European Center for Not-for-Profit Law. "The LLMs are moderated poorly, and the poorly moderated LLMs are then also used to moderate other content," she says. "It's so circular, and the errors just keep repeating and amplifying."
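To make the mechanism concrete, here is a minimal, purely illustrative sketch (not drawn from the article or from any particular platform) of how an automated pipeline might ask an LLM to flag a post. The function names, the placeholder `call_llm`, and the prompt wording are all hypothetical; the point is that the flagging decision is only as good as the model's grasp of the post's language and context.

```python
# Illustrative sketch only: a simplified LLM-based moderation check of the kind
# described above. `call_llm` is a placeholder for whatever hosted or local model
# a platform might use; it is not a real library function.
from typing import Literal

Label = Literal["allow", "flag", "unsure"]


def call_llm(prompt: str) -> str:
    """Placeholder: send `prompt` to a language model and return its raw text reply."""
    raise NotImplementedError("Wire this to an actual model endpoint.")


def moderate_post(post_text: str, language_hint: str = "unknown") -> Label:
    """Ask the model whether a post contains gender-based violence or harassment.

    The prompt itself is written in English, which mirrors the bias problem the
    article describes: posts in code-mixed or non-Western languages are more
    likely to come back mislabeled or marked 'unsure'.
    """
    prompt = (
        "You are a content moderation assistant. "
        f"The post below may be written in any language (hint: {language_hint}).\n"
        "Answer with exactly one word: ALLOW, FLAG, or UNSURE.\n"
        "FLAG it if it contains gender-based violence, threats, or harassment.\n\n"
        f"Post: {post_text}"
    )
    reply = call_llm(prompt).strip().upper()
    if reply == "FLAG":
        return "flag"
    if reply == "ALLOW":
        return "allow"
    # Ambiguous replies go to human review rather than automatic action.
    return "unsure"
```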

Part of the problem is that the systems are trained primarily on data from the English-speaking world (and American English at that), and as a result they perform less well with local languages and context.

Even multilingual language models, which process multiple languages at once, still perform poorly with non-Western languages. For example, one evaluation of ChatGPT's responses to health-care queries found that results were far worse in Chinese and Hindi, which are less well represented in North American data sets, than in English and Spanish.

For many people at RightsCon, this validates their calls for more community-driven approaches to AI, both in and beyond the social media context. These could include small language models, chatbots, and data sets designed for particular uses and specific to particular languages and cultural contexts. Such systems could be trained to recognize slang usages and slurs, interpret words or phrases written in a mix of languages and even alphabets, and identify "reclaimed language" (once-derogatory terms that the targeted group has decided to embrace). All of these tend to be missed or miscategorized by language models and automated systems trained primarily on Anglo-American English. The founder of the startup Shhor AI, for example, hosted a panel at RightsCon and talked about its new content moderation API focused on Indian vernacular languages.
