Adobe partners with ethical hackers for building protected, more secure AI tools

As we keep on to consolidate generative AI into our daily lives, it’s crucial to comprehend and reduce the risks  arising from its usage. Adobe's  undergoing commitment to elevating secure, safe, and reliable AI involves transparency about the capacities and constraints of big language models (LLMs).

For a long time, Adobe has put a focus on forming a strong foundation of cybersecurity, which is built on a culture of collaboration, enabled by strong partnerships, talented professionals,  leading edge capabilities, and deep engineering expertise. Research is prioritised by it and collaborating with the wider industry on preventing risks by  developing and employing AI in a responsible manner

Adobe has been actively engaged with partners, standards organizations, and security researchers for many years to collectively enhance the security of our products. Adobe  receives reports directly and through our presence on the HackerOne platform and are continually looking at ways to further engage with the community and open feedback to enhance our products and innovate responsibly.

Commitment to responsible AI innovation

Today, Adobe announced the expansion of the Adobe bug bounty program to reward security researchers for discovering and responsibly disclosing bugs specific to our implementation of Content Credentials and Adobe Firefly. By fostering an open dialogue, Adobe aims to encourage fresh ideas and perspectives while providing transparency and building trust.

Content Credentials are built on the C2PA open standard and serve as tamper-evident metadata that can be attached to digital content to provide transparency about their creation and editing process. Content Credentials are currently integrated across popular Adobe applications such as Adobe Firefly, Photoshop, Lightroom and more. We are crowdsourcing security testing efforts for Content Credentials to reinforce the resilience of Adobe’s implementation against traditional risks and unique considerations that come with the provenance tool, such as the potential for intentional abuse of Content Credentials by incorrectly attaching them to the wrong asset.

Adobe Firefly is a family of creative generative AI models available as a standalone web application at firefly.adobe.com as well as through features powered by Firefly in Adobe flagship applications. We encourage security researchers to review the OWASP Top 10 for Large Language Models, such as prompt injection, sensitive information disclosure, or training data poisoning to help focus their research efforts to pinpoint weaknesses in these AI-powered solutions.

 

Safer, more secure generative AI models

By proactively engaging with the security community, Adobe hopes to gain additional insights into our generative AI technologies which, in turn, will provide valuable feedback to our internal teams and security program. This feedback will help identify key areas of focus and opportunities to reinforce security.

In addition to the hacker-powered research of Adobe, we leverage our robust security program that includes penetration testing, red-teaming, code scanning to continually enhance the security of our products and systems, including Adobe Firefly and our Content Credentials implementation.

The skills and expertise of security researchers play a critical role in enhancing security and now can help combat the spread of misinformation,” said Dana Rao, executive vice president, general counsel and chief trust officer at Adobe. “We are committed to working with the broader industry to help strengthen our Content Credentials implementation in Adobe Firefly and other flagship products to bring important issues to the forefront and encourage the development of responsible AI solutions.”

“Building safe and secure AI products starts by engaging experts who know the most about this technology’s risks. The global ethical hacker community helps organizations not only identify weaknesses in generative AI but also define what those risks are,” said Dane Sherrets, senior solutions Architect at HackerOne. “We commend Adobe for proactively engaging with the community, responsible AI starts with responsible product owners.”

“It’s great to see the scope of products widen to encompass areas such as artificial intelligence, combatting misinformation, Internet of Things, and even cars. These additions may require additional training for ethical hackers to acquire the necessary skills to uncover critical vulnerabilities," said Ben Sadeghipour, founder of NahamSec. “Bug Bounty Village is committed to expanding our workshops and partnering with more organizations, like Adobe, to ensure security researchers are equipped with the right tools to protect these technologies.”

These are early steps toward ensuring the safe and secure development of generative AI and Adobe knows the work is just getting started. The future of technology is exciting, but there can be implications if these innovations are not built responsibly. Adobe's hope is that by incentivizing more security research in these areas, Adobe will spark even more collaboration with the security community and others, to ultimately make AI safer for everyone.

Media
@adgully

News in the domain of Advertising, Marketing, Media and Business of Entertainment

More in Media