This position requires office presence of a minimum of 5 days per week and is only available in the location(s) posted. No relocation is offered.
Join AT&T and reimagine the communications and technologies that connect the world. Our Chief Security Office ensures that our assets are safeguarded through truthful transparency, enforced accountability, and mastery of cybersecurity to stay ahead of threats. Bring your bold ideas and fearless risk-taking to redefine connectivity and transform how the world shares stories and experiences that matter. When you step into a career with AT&T, you won't just imagine the future; you'll create it.
We are seeking an Application Security Architect to secure the design, development, integration, and operation of AI/ML-enabled applications, including LLMs, agent-based systems, RAG pipelines, model-serving APIs, and AI orchestration frameworks, as well as to advance the vulnerability management program as it relates to AI-based vulnerabilities. This role combines application security architecture with AI security engineering to reduce risk across the full AI lifecycle – from data ingestion and model integration to inference-time protections and production governance – and to lead AI security from a vulnerability management and risk-reduction perspective. The role is primarily focused on identifying, assessing, prioritizing, and helping remediate security weaknesses across AI-enabled applications, services, models, and integration patterns in order to reduce exploitability and accelerate remediation.
The ideal candidate combines strong Application Security expertise with practical experience securing AI/ML systems, LLM-based applications, agentic workflows, and model integrations. This individual should understand both traditional AppSec principles and AI-specific attack patterns and be able to apply that knowledge to improve vulnerability discovery, risk triage, security testing, architecture review, and remediation guidance across the AI lifecycle.
We are looking for a technically minded, hands-on security architect who can evaluate AI implementations for real security risk, define effective controls, partner with engineering teams to remediate issues, and improve how AI-related vulnerabilities are managed across development and production environments. The right candidate will also bring coding aptitude and implementation experience to support secure development workflows, integrate security checks and automation, implement security controls in applications and pipelines, and build practical solutions where necessary to improve coverage, consistency, and speed.
Job Summary:
The Application Security Architect is responsible for defining and driving secure-by-design approaches for AI-enabled applications and services. This role focuses on protecting the full lifecycle of AI/ML systems, including:
- LLM-based applications
- Agentic workflows
- Retrieval-augmented generation (RAG)
- Model APIs and inference services
- Training/fine-tuning pipelines
- Third-party AI integrations and SaaS capabilities
The architect will work closely with application teams, enterprise architects, AI/ML engineers, developers, cloud/platform teams, and security stakeholders to establish secure patterns, identify AI-specific risks, implement technical controls, and support responsible adoption of AI capabilities across the organization.
Success in this role requires:
- Deep understanding of application security architecture
- Strong knowledge of AI/ML technologies, frameworks, and deployment models
- Hands-on experience with AI security controls and implementation
- Ability to code, automate, integrate, and validate technical solutions
- Practical familiarity with AI security standards and threat frameworks
- Hands-on familiarity with source control, repository workflows, CI/CD integration, and artifact/package management, including platforms such as GitHub and JFrog
Detailed Job Description:
This role is centered on securing AI-enabled applications and platforms through a combination of application security architecture, AI threat modeling, technical design review, secure implementation guidance, and control validation.
You will help define how AI solutions are securely adopted and deployed, whether they are built in-house, fine-tuned from existing models, or integrated through third-party APIs and enterprise AI platforms. This includes securing AI-related application flows such as:
- Prompt handling
- Model invocation
- Data retrieval and context injection
- Plugin/tool calling
- Agent permissions and action boundaries
- Output validation and post-processing
- API exposure and service-to-service integration
You will assess and mitigate AI-specific threats such as:
- Prompt injection
- Jailbreaking
- Data poisoning
- Training-data leakage
- Sensitive data exposure
- Model inversion and extraction
- Excessive agency in autonomous workflows
- Unauthorized model/API access
- Abuse of model-serving endpoints
The right candidate will bring an AppSec mindset first, understanding secure design, trust boundaries, authn/authz, API risk, abuse cases, and vulnerability management, while also possessing hands-on familiarity with AI ecosystems, orchestration frameworks, model integration patterns, and AI deployment architectures.
Key Responsibilities:
AI Security Architecture & Design
- Design, review, and validate secure architectural patterns for AI/ML and LLM-enabled applications, including locally hosted models, cloud-native AI services, API-based model access, RAG systems, and agent-based workflows.
- Define secure reference architectures for AI integrations across applications, services, and platforms.
- Ensure security is embedded into AI solution design from the start, including trust boundaries, identity controls, data flows, model access, and output handling.
- Advise teams on secure use of frameworks such as Azure AI Foundry, LangChain, Semantic Kernel, OpenAI/Azure OpenAI integrations, and similar orchestration or inference technologies.
AI Threat Modeling & Security Reviews
- Lead threat modeling sessions for AI-enabled applications and platforms to identify abuse cases, architectural weaknesses, and control gaps.
- Assess risks such as prompt injection, model evasion, data poisoning, jailbreaks, model inversion, model extraction, tool misuse, and unauthorized privilege escalation through agent workflows.
- Conduct technical security reviews of AI applications, integrations, and architectures with clear remediation recommendations and risk prioritization.
- Translate AI threat scenarios into practical mitigations that development and engineering teams can implement.
Guardrails, Controls & Secure Implementation
- Define and implement AI-specific security guardrails, including prompt/input filtering, context validation, output sanitization, response validation, policy enforcement, model/tool access restrictions, and sensitive data handling controls (see the illustrative sketch after this list).
- Recommend and help implement controls for human-in-the-loop approvals, action scoping, tool permissions, content safety, and unsafe output suppression in agentic or autonomous systems.
- Validate that security controls are effective in real usage scenarios and resilient against adversarial behavior.
- Support application teams in integrating AI protections into code, middleware, APIs, and orchestration frameworks.
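As a concrete illustration of the layered input/output guardrails described above, the following is a minimal, hypothetical Python sketch; the pattern lists, function names, and wrapper shape are illustrative placeholders, not a vetted detection ruleset or a prescribed implementation.

```python
import re

# Illustrative placeholders only: these patterns are not a complete or vetted ruleset.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.IGNORECASE),
    re.compile(r"reveal (the )?system prompt", re.IGNORECASE),
]
SECRET_PATTERN = re.compile(r"\b(?:AKIA[0-9A-Z]{16}|sk-[A-Za-z0-9]{20,})\b")


def screen_prompt(user_input: str) -> str:
    """Reject inputs matching known injection phrasing before model invocation."""
    for pattern in INJECTION_PATTERNS:
        if pattern.search(user_input):
            raise ValueError("Input blocked by prompt-injection guardrail")
    return user_input


def sanitize_output(model_output: str) -> str:
    """Redact apparent credentials from model output before returning it to the caller."""
    return SECRET_PATTERN.sub("[REDACTED]", model_output)


def guarded_invoke(user_input: str, invoke_model) -> str:
    """Wrap a model call with pre- and post-invocation checks (layered control)."""
    safe_input = screen_prompt(user_input)
    raw_output = invoke_model(safe_input)  # invoke_model: any callable that returns text
    return sanitize_output(raw_output)
```

In practice, a wrapper like this would complement, not replace, platform-level content safety, policy enforcement, and monitoring controls.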
MLSecOps / DevSecOps for AI
- Embed security into the AI/ML development lifecycle by integrating controls into CI/CD and ML pipelines, including data ingestion, model packaging, deployment, and runtime validation.
- Help implement security scanning and policy checks for models, datasets, dependencies, containers, APIs, infrastructure-as-code, and deployment pipelines (an illustrative sketch follows this list).
- Define secure operational patterns for model versioning, rollback, promotion, and change management.
- Partner with engineering teams to automate repeatable security checks and guardrails across AI-enabled delivery pipelines.
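By way of example, here is a minimal, hypothetical sketch of the kind of pipeline policy check described above: a Python step that verifies model and artifact digests against an approved manifest before deployment. The manifest format and file paths are assumptions for illustration only.

```python
import hashlib
import json
import sys
from pathlib import Path


def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_artifacts(manifest_path: str) -> int:
    """Return a non-zero exit code if any artifact digest differs from the approved manifest.

    The manifest is assumed to be JSON mapping artifact paths to expected SHA-256 digests,
    e.g. {"models/classifier.onnx": "ab12..."} (format is an assumption for illustration).
    """
    manifest = json.loads(Path(manifest_path).read_text())
    failures = []
    for artifact, expected in manifest.items():
        actual = sha256_of(Path(artifact))
        if actual != expected:
            failures.append(f"{artifact}: expected {expected}, got {actual}")
    for failure in failures:
        print(f"INTEGRITY FAILURE: {failure}", file=sys.stderr)
    return 1 if failures else 0


if __name__ == "__main__":
    sys.exit(verify_artifacts(sys.argv[1]))
```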
Software Engineering & Repository Security
- Write, review, and where needed help implement code to support AI security controls, automation, integrations, and remediation activities.
- Work within standard software development workflows using source control platforms such as GitHub, including branch management, pull requests, code review, and CI/CD integration.
- Partner with engineering teams to secure repositories, workflows, secrets handling, dependency use, and release processes.
- Support secure management of artifacts, packages, containers, and model-related assets through repositories and platforms such as JFrog Artifactory.
- Help establish secure practices for versioning, promotion, provenance, and lifecycle management of code, models, packages, and deployment artifacts.
AI Incident Readiness & Response
- Develop AI-focused incident response guidance and playbooks for scenarios such as prompt-based abuse, sensitive data leakage, poisoning, model misuse, or unauthorized access to AI components.
- Support investigations involving AI-enabled applications by providing architectural context, attack-path analysis, and mitigation recommendations.
- Help teams improve resilience and detection capabilities based on lessons learned from testing, incidents, and near misses.
Vulnerability Management for AI Systems
- Establish processes for identifying, assessing, prioritizing, and tracking vulnerabilities or control gaps in AI-enabled applications, model-serving endpoints, datasets, orchestration layers, and supporting infrastructure.
- Drive risk-based prioritization of AI security issues, balancing exploitability, exposure, data sensitivity, and business impact.
- Support remediation efforts by recommending practical fixes such as architectural changes, guardrail improvements, retraining/tuning strategies, or access-control enhancements.
- Help define how AI-related findings are documented, triaged, and governed within broader AppSec and vulnerability management workflows.
Application Security & Vulnerability Management Focus
- Secure the data supply chain for AI systems, including training, tuning, embeddings, vector stores, and contextual retrieval components.
- Protect against prompt injection and indirect prompt injection through layered controls, trust-boundary design, input validation, and context isolation strategies.
- Secure API endpoints serving AI predictions or orchestration actions using strong identity, access control, rate limiting, abuse prevention, and logging/traceability (see the illustrative sketch after this list).
- Focus on risk reduction and control effectiveness for AI vulnerabilities, including cases where mitigation relies on architecture, policy, or model behavior controls rather than traditional patching.
- Ensure secure model and artifact versioning, provenance awareness, and rollback capabilities in cases of drift, poisoning, or faulty releases.
- Apply traditional AppSec principles—such as secure design, authn/authz, secrets protection, input handling, dependency security, and least privilege—to AI-enabled systems and integrations.
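As one hedged illustration of the identity and rate-limiting controls mentioned above for model-serving endpoints, the sketch below uses a simple per-caller token bucket in Python; the capacities, refill rates, and function names are hypothetical placeholders rather than recommended values.

```python
import time
from dataclasses import dataclass, field
from typing import Dict, Optional


@dataclass
class TokenBucket:
    """Per-caller token bucket; refuses requests once the allowance is exhausted."""
    capacity: float = 10.0     # burst size (hypothetical default)
    refill_rate: float = 1.0   # tokens added per second (hypothetical default)
    tokens: float = field(default=0.0, init=False)
    last_refill: float = field(default_factory=time.monotonic, init=False)

    def __post_init__(self) -> None:
        self.tokens = self.capacity

    def allow(self) -> bool:
        """Refill based on elapsed time, then spend one token if available."""
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last_refill) * self.refill_rate)
        self.last_refill = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False


# One bucket per authenticated caller identity; unauthenticated calls are rejected outright.
buckets: Dict[str, TokenBucket] = {}


def authorize_inference(caller_id: Optional[str]) -> bool:
    """Gate a model-serving request on identity plus rate limit before invoking the model."""
    if not caller_id:
        return False  # no anonymous access to the inference endpoint
    bucket = buckets.setdefault(caller_id, TokenBucket())
    return bucket.allow()
```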
Qualifications / Requirements / Skills:
- 7+ years of experience in application security, product security, security architecture, or secure software engineering, with at least 2–3 years focused on AI/ML or LLM security, AI-enabled application architecture, or adversarial AI security.
- Strong background in application security principles and methodologies, including secure design review, threat modeling, vulnerability management, API security, authn/authz, and secure SDLC practices.
- Demonstrated experience securing AI/ML systems, LLM-enabled applications, or AI integration patterns in enterprise or production environments.
- Practical experience with AI models, frameworks, and orchestration technologies, such as Azure AI Foundry, Azure OpenAI/OpenAI APIs, LangChain, Semantic Kernel, Hugging Face, TensorFlow, PyTorch, or similar ecosystems.
- Hands-on experience implementing security controls for AI use cases, including prompt filtering, output validation, model access controls, data protections, agent/tool guardrails, and monitoring.
- Strong understanding of AI-specific threats such as prompt injection, jailbreaks, model inversion, data poisoning, model extraction, insecure plugins/tools, and sensitive data leakage.
- Demonstrated ability to write, review, and implement code when needed, including scripting, prototyping, automation, integrating security controls into applications and CI/CD pipelines, and building practical solutions to support AppSec and AI security use cases.
- Proficiency in one or more programming/scripting languages such as Python, JavaScript/TypeScript, Go, or Bash; Python strongly preferred, with the ability to work comfortably in existing codebases, automation scripts, and integration layers.
- Experience working with cloud-native platforms and services (Azure preferred; AWS/GCP also valuable), including APIs, containers, IAM, secrets management, logging, and deployment pipelines.
- Strong familiarity with AI and AppSec frameworks such as OWASP LLM Top 10, NIST AI RMF, MITRE ATLAS, and secure architecture principles for AI systems.
- Practical experience working with source code repositories and modern development workflows, including branching, pull requests, code review, repository hygiene, and CI/CD integration.
- Experience using or supporting GitHub-based development environments, including repository management, Git-based workflows, and security integration into build and deployment pipelines.
- Familiarity with artifact, package, and binary repository management, including platforms such as JFrog Artifactory, to support secure handling of dependencies, build artifacts, containers, models, or related software assets.
- Strong communication skills with the ability to work across engineering, architecture, data science, security, risk, and leadership stakeholders.
Education Requirements:
- Bachelor’s degree in Computer Science, Cybersecurity, Information Security, Software Engineering, Data Science, or a related technical field; or equivalent practical experience.
- Master’s degree in a relevant field is a plus, especially where focused on security, AI/ML, software engineering, or systems architecture.
- An equivalent combination of education, hands-on security engineering experience, and AI implementation experience will be considered in lieu of formal advanced degrees.
Nice-to-Haves / Preferred or Desired Skills:
- Experience securing agentic AI systems, tool-calling architectures, or autonomous workflows with scoped permissions and human-approval gates.
- Experience with RAG security, including vector database protections, retrieval trust boundaries, document sanitization, and context isolation.
- Hands-on experience evaluating or red-teaming AI systems for jailbreaks, prompt injection, leakage, or unsafe action chaining.
- Experience building internal security tooling, validation harnesses, test frameworks, or policy enforcement layers for AI-enabled applications.
- Familiarity with MLOps/MLSecOps platforms, model registries, feature stores, and secure model lifecycle management.
- Experience with enterprise AI governance, model risk management, or responsible AI control frameworks.
- Relevant certifications or demonstrable equivalent experience in cloud security, application security, AI/ML security, or secure architecture.
- Experience implementing or reviewing GitHub Actions, repository protections, branch controls, and security checks in GitHub-based CI/CD workflows.
- Experience with JFrog Artifactory/Xray or similar tooling for artifact, package, container, and dependency management.
- Experience contributing directly to shared codebases, internal tooling, or developer security integrations in enterprise software environments.
- Experience securing software supply chain components, including repositories, dependencies, packages, containers, and build provenance.
Why This Role is Unique:
This role is unique because it sits at the intersection of Application Security, AI/ML architecture, and hands-on security engineering. It is not a traditional security governance role, and it is not purely an AI engineering role. We are looking for someone who can bridge both worlds: a candidate who understands how applications are built and attacked, how AI systems are integrated and abused, and how to translate that into secure architecture, practical controls, and scalable implementation patterns.
This role is an opportunity to shape how AI is adopted securely across the organization by influencing architecture, standards, implementation, and operational guardrails from the ground up. The ideal candidate will help define the future state of AI-enabled application security while also remaining close enough to the technology to validate designs, code solutions where needed, and solve real-world security problems.
Typical Goals (30/60/90 Days):
30 Days
- Inventory current AI-enabled applications, model integrations, third-party AI services, and major use cases.
- Build an initial view of the organization’s AI attack surface and identify the highest-risk applications or integration patterns.
- Meet with key stakeholders across AppSec, architecture, AI/ML, engineering, platform, and risk functions to understand current capabilities and gaps.
- Review existing standards, deployment patterns, and known AI-related risks.
60 Days
- Establish and socialize a lightweight AI threat modeling and secure architecture review process.
- Publish baseline AI application security standards and secure implementation guidance.
- Prioritize top AI security control gaps and recommend near-term remediation or guardrail opportunities.
- Begin assessment of one or more high-impact AI initiatives or platforms.
90 Days
- Design, develop, and deliver a pilot agentic workflow that automates one high-value AppSec use case end-to-end, such as vulnerability triage, secure coding guidance, or remediation task generation, with human approval built into the process.
- Integrate security controls or checkpoints into at least two AI/ML or LLM delivery workflows.
- Deliver architecture recommendations and risk treatment plans for the highest-priority AI initiatives.
- Stand up or improve repeatable processes for AI security review, control validation, and issue tracking.
- Help define a roadmap for AI security maturity across the broader AppSec program.
Supervisor:
No
Our Lead Cybersecurity earns between $128,400 and $192,600 USD annually, not to mention all the other amazing rewards that working at AT&T offers. Individual starting salary within this range may depend on geography, experience, expertise, and education/training.
Joining our team comes with amazing perks and benefits:
- Medical/Dental/Vision coverage
- 401(k) plan
- Tuition reimbursement program
- Paid Time Off and Holidays (based on date of hire, at least 23 days of vacation each year and 9 company-designated holidays)
- Paid Parental Leave
- Paid Caregiver Leave
- Additional sick leave beyond what state and local law require may be available but is unprotected
- Adoption Reimbursement
- Disability Benefits (short term and long term)
- Life and Accidental Death Insurance
- Supplemental benefit programs: critical illness, accident hospital indemnity, and group legal
- Employee Assistance Programs (EAP)
- Extensive employee wellness programs
- Employee discounts up to 50% off on eligible AT&T mobility plans and accessories, AT&T internet (and fiber where available), and AT&T phone.
#LI-Onsite – Full-time office role
Ready to join our team? Apply today
Weekly Hours:
40
Time Type:
Regular
Location:
Alpharetta, Georgia, Atlanta, Georgia, Bedminster, New Jersey, Bothell, Washington, Dallas, Texas, Middletown, New Jersey, USA:NC:Charlotte / Research Dr - Dat:9139 Research Dr
Salary Range:
$141,300.00 - $237,400.00
It is the policy of AT&T to provide equal employment opportunity (EEO) to all persons regardless of age, color, national origin, citizenship status, physical or mental disability, race, religion, creed, gender, sex, sexual orientation, gender identity and/or expression, genetic information, marital status, status with regard to public assistance, veteran status, or any other characteristic protected by federal, state or local law. In addition, AT&T will provide reasonable accommodations for qualified individuals with disabilities. AT&T is a fair chance employer and does not initiate a background check until an offer is made.