Detecting integrity violations (e.g., misinformation, incitement to violence, hate speech) on social media platforms is key to keeping information ecosystems safe and secure. Traditionally, platforms have relied on keywords to detect these harms. However, adversarial actors have adapted and learned to evade standard detection approaches, making harm difficult to detect both over time and at scale. We develop a novel method that models adversarial harmful movements as an interaction graph and then leverages the graph structure to efficiently learn language and signal adaptations. The proposed approach marries network and text mining techniques to extract signals from noisy text data and efficiently learn movement narratives and frames. Using data from Facebook, we demonstrate a proof of concept on conspiracy-based misinformation movements circulating on the platform, and we show that our networked approach outperforms standard text mining approaches. This work highlights how leveraging both structure (i.e., how users interact with one another in an information space) and content (i.e., the content users produce) allows one to better model context. Our work offers an approach for extracting insights from noisy social media data.