Friday Five: The U.S. Cyber Workforce Strategy, Concerns at the UN Cybercrime Convention, & More
Cybercrime—and how AI could play an increasing part in it—was at the forefront of the news this past week. Read about this, the White House’s National Cyber Workforce and Education Strategy, and more in this week’s Friday Five!
WHITE HOUSE RELEASES STRATEGY TO EXPAND THE U.S. CYBER WORKFORCE BY ALEXANDRA KELLEY & NATALIE ALMS
The Biden administration introduced its National Cyber Workforce and Education Strategy, which aims to integrate cybersecurity concepts at every stage, from early childhood education to advanced occupational training. The strategy seeks to bridge the gap between cybersecurity jobs and current education, emphasizing digital literacy and non-technical skills, and to equip the workforce with the technological skills needed for robust cyber defenses, aligning with national security and economic needs. The plan is said to focus on four pillars: foundational skills, transforming education, expanding the cyber workforce, and strengthening federal cyber capabilities. Diversity is also said to be a cornerstone of the strategy, with scholarships and stipends offered to underrepresented communities and institutions. Finally, the strategy addresses barriers to transitioning between private sector and government roles and extends cybersecurity pay enhancements.
UN CYBERCRIME CONVENTION NEGOTIATIONS ENTER FINAL PHASE WITH TROUBLING SURVEILLANCE POWERS STILL ON THE TABLE BY KATITZA RODRIGUEZ
The final negotiation session for the UN Cybercrime Convention is drawing near. Representatives from 140 countries have debated cross-border police powers, data access, oversight, and more, and the outcome could hold far-reaching implications well beyond Vienna and New York. The convention will reportedly reshape global criminal and surveillance laws, impacting millions, including human rights defenders. Current provisions grant broad surveillance powers but are said to lack effective checks, raising concerns over domestic spying, content interception, and real-time data collection. Read more about why the human rights safeguards in Articles 5 and 24 are crucial yet still inadequate, and why future safeguards must be robust, including judicial approval, time limits, remedies, and transparency.
CYBERCRIMINALS TRAIN AI CHATBOTS FOR PHISHING, MALWARE ATTACKS BY BILL TOULAS
A new AI hacking tool known as FraudGPT has emerged, alongside another in development that is said to be based on Google's AI experiment Bard. Both were created by the same person, a user known as CanadianKingpin12, who is reportedly training chatbots on dark web data or large language models for cybercrime. These malicious chatbots aid in phishing, social engineering, exploiting vulnerabilities, malware creation, and more. Drawing on DarkBERT, a language model developed for cybersecurity research, the same creator is additionally developing DarkBART, a sinister version of Google's AI. The rise of such AI-driven tools highlights their growing use in cybercrime, posing serious threats to cybersecurity.
GOOGLE WARNS COMPANIES ABOUT KEEPING HACKERS OUT OF CLOUD INFRASTRUCTURE BY TONYA RILEY
Google Cloud released a report this past week warning about the increasing threat of malicious hackers targeting cloud infrastructure with a variety of cyberattacks. The tech giant emphasizes the need for vigilance to protect customers, highlighting a rise in source code hacks and leaks (often caused by credential compromise), misconfigurations, insider threats, and third-party compromise. The report also highlights the uploading of malicious updates to apps on Google Play and attackers targeting software update pipelines. Google Cloud researchers stress that companies should be cautious of hackers using their cloud environments as launching pads for attacks on others.
FBI WARNS OF BROAD AI THREATS FACING TECH COMPANIES AND THE PUBLIC BY AJ VICENS
The FBI has issued a warning about the increasing threat to executives, researchers, and engineers in the AI field from criminal and nation-state hackers aiming to steal intellectual property or data related to advanced chatbots. This risk coincides with the growing public accessibility of AI tools like OpenAI's ChatGPT and Google's Bard, as well as companies' expanding ability to develop AI language models. There is growing concern about the transfer of AI information, including algorithms, expertise, and computing infrastructure, through various methods, and the FBI anticipates a rise in the targeting and collection of data from U.S. companies, universities, and government research facilities for AI advancements. Additionally, the FBI highlighted the use of AI by cybercriminals for traditional crimes like fraud and extortion, including the creation of "deepfakes" for harassment or extortion purposes.