Jaron Mink works at the intersection of usable security, machine learning, and system security.
His work explores how human interaction affects ML security in two ways: how human factors can be (1) exploited to undermine security and (2) harnessed to improve it. As ML-enabled abuse becomes increasingly common, he investigates how lay users perceive and react to new attacks, e.g., how social media users respond to deepfakes. And as ML is increasingly applied in security-critical systems, Mink evaluates how usable these tools are for technical users, e.g., how easily ML developers can apply security defenses.