
The 2024 edition of the AI Index from Stanford University’s Institute for Human-Centered AI (HAI) finds that while AI has the potential to transform productivity growth, significant limitations and ethical concerns still surround its use. The report highlights that AI has made major advances on tasks such as image classification and English-language understanding, but still struggles with more complex tasks like mathematics and planning.

Alongside AI’s rapid growth in capability over the past year, the costs of developing and maintaining large language models (LLMs) have skyrocketed. The report notes that new LLMs are being released at a rapid pace, with the highest-performing models coming from industry players with closed systems. Training costs have also risen sharply, with training runs for models such as OpenAI’s GPT-4 and Google’s Gemini Ultra estimated to cost millions of dollars.

On the investment front, funding for generative AI has surged over the past year, reaching $25.2 billion. Major players in the space, including OpenAI, Anthropic, Hugging Face, and Inflection, have reported substantial fundraising rounds. However, the report calls for greater transparency from AI developers, particularly in disclosing training data and methodologies, arguing that this lack of openness hinders efforts to assess the robustness and safety of AI systems.

Responsible AI remains a work in progress, and standardized evaluations of LLM responsibility are still lacking. The report highlights this absence of standardization in responsible AI reporting: leading developers primarily test their models against different benchmarks, which makes it difficult to compare the risks and limitations of competing AI models. The increase in AI regulations in the United States reflects growing concern over AI ethics and responsible use.

Intellectual property and copyright have also emerged as key issues in the AI space, as generative AI synthesizes content from many sources. Some researchers have found that outputs from popular LLMs may contain copyrighted material, raising questions about potential infringement. The report emphasizes the need for legal clarity on how copyright applies to generative AI outputs. As the industry continues to evolve, addressing these questions will be crucial to ensuring the responsible and ethical use of AI technology.
