Australia Unveils New AI Guidelines with Potential for Future Regulation
Key Points

  • Australia introduced 10 voluntary AI guidelines focusing on human intervention and transparency, with potential future regulations for high-risk scenarios.
  • A one-month consultation will determine if these guidelines should become mandatory in certain settings.
  • The guidelines come amid global concerns over misinformation from AI tools, contrasting with stricter EU regulations.
  • AI is expected to generate up to 200,000 jobs in Australia by 2030, highlighting the need for responsible AI development and use.

Australia’s center-left government announced plans on Thursday to introduce targeted artificial intelligence (AI) regulations, focusing on human oversight and transparency, amid the rapid adoption of AI in business and daily life. Industry and Science Minister Ed Husic unveiled 10 new voluntary AI guidelines and launched a one-month consultation to determine whether they should become mandatory in high-risk settings.

“Australians know AI can do great things, but they also want assurances that protections are in place if things go wrong,” Husic said. “We’ve heard the calls for stronger AI protections, and we’re responding.”

The newly introduced guidelines emphasize the necessity of human intervention throughout an AI system’s lifecycle to prevent unintended consequences. “Meaningful human oversight will allow intervention when needed, reducing the risk of harm,” the report outlining the guidelines stated. It also called for companies to be transparent about AI’s role when generating content, ensuring users are informed about AI involvement.

Global regulators have raised alarms about spreading misinformation and fake news due to AI tools, especially as generative AI systems like Microsoft-backed OpenAI’s ChatGPT and Google’s Gemini gain popularity. In response, the European Union passed landmark AI legislation in May, enforcing strict transparency requirements on high-risk AI systems, contrasting with the more lenient voluntary compliance seen in other nations.

“We believe the era of self-regulation for AI has ended. We’ve crossed that line,” Husic told ABC News, highlighting the need for more robust regulatory measures.

Australia currently lacks AI-specific regulations, relying instead on eight voluntary principles for responsible AI use introduced in 2019. However, a government report earlier this year concluded that these measures are insufficient for managing high-risk scenarios. Husic noted that only about a third of businesses using AI do so responsibly when measured against criteria of safety, fairness, accountability, and transparency.

With AI expected to create up to 200,000 jobs in Australia by 2030, Husic emphasized the importance of equipping Australian businesses to develop and use AI technologies responsibly.
