Weaponizing Artificial Intelligence Large Language Models (LLMs)

RedSense examines how successful adversaries have been in weaponizing Large Language Models (LLMs) for malicious exploits. Our previous report highlighted the Royal group's (now BlackSuit) attempt to manipulate 'WormGPT' to enhance its lateral movement techniques, with a focus on effective Cobalt Strike deployment. Their efforts were, however, unfruitful.

Our newest discovery comes from 'GPT Bypass', a Telegram channel with 5,000 subscribers. See the full explanation in RedSense Chief Research Officer Yelisey Bohuslavskiy's post.

Full Article