OpenAI launches Bug Bounty: up to $20,000 for those who find a bug in ChatGPT



OpenAI, under scrutiny around the world, now seems determined to demonstrate that it is willing to create safe artificial intelligence aimed at the good of humanity. It is in this context that it has just launched a bug bounty program: rewards from $200 up to $20,000 for anyone who finds a bug in ChatGPT.

The company writes on its blog: "The OpenAI Bug Bounty Program is a way to recognize and reward the valuable insights of security researchers who help keep our technology and our company safe. We encourage you to report any vulnerabilities, bugs, or security flaws you discover in our systems. By sharing your findings, you will play a crucial role in making our technology safer for everyone."

Why a Bug Bounty?

"To incentivize testing and as a token of our appreciation, we will offer cash rewards based on the severity and impact of the reported issues. Rewards range from $200 for low-severity findings up to $20,000 for exceptional discoveries. We recognize the importance of your contributions and are committed to acknowledging your efforts."

To this end, OpenAI has partnered with Bugcrowd, a leading bug bounty platform, to manage the submission and reward process. Underscoring its broader intentions, the OpenAI page dedicated to the bounty program opens with a section titled "Our commitment to secure AI": "OpenAI's mission is to create artificial intelligence systems that benefit everyone. To that end, we invest heavily in research and engineering to ensure our AI systems are safe and secure. However, as with any complex technology, we are aware that vulnerabilities and flaws can emerge," the company explains.

Not just bounties

"Are you interested in contributing further? We're hiring: explore open security roles on our careers page. Join us to ensure that the frontier of technology is secure."

In the very days in which it was being challenged by the Italian data protection authority (the Garante), OpenAI published articles to reassure the world. On the one hand, it reiterates that its basic, very ambitious objective is to go beyond today's ChatGPT and arrive at an "artificial general intelligence" (AGI), truly capable of reproducing all the characteristics of human intelligence (with the added benefits of enormous speed and nearly unlimited memory). On the other, it says it wants an AGI aligned with human values and able to follow human intentions; in short, not an out-of-control, harmful AI. "Misaligned AIs," OpenAI explains, "could entail substantial risks for humanity."


To align AI with humanity, OpenAI does three things: it trains AI systems using human feedback; it trains them to assist human evaluation; and it also trains them expressly through ongoing research into alignment with human values.

This week, US President Biden's administration also began evaluating the need to regulate "generative" artificial intelligence tools such as ChatGPT, amid growing concerns that the technology could be used to discriminate or spread harmful information. As a first step toward potential regulation, the US Department of Commerce filed a formal public request for comment on what it calls accountability measures, including the possibility that potentially risky new AI models would have to go through a certification process before release. This kind of regulation recalls the concept of an "impact assessment" in the current draft of the European Commission's AI Act, which, however, requires it only for high-risk AI applications (in health or safety, for example).

A few days ago, again on its blog, OpenAI promised to collaborate with research and policy institutions: "We seek to create a global community that works together to address the global challenges of AGI."
