Tenemos para ti

Auto Apply

NEW

Crea tu CV con IA

Cartas y emails
con IA

Auto
Apply

NEW

Crea tu CV
con IA

Email
y carta

Ofertas Laborales

JobAdvisor
Sophia PRO

AI Engineer, LLM Systems & Agentic Workflows

CommandLink

About Command|LinkCommand|Link is a global SaaS Platform providing network, voice services, and IT security solutions, helping corporations consolidate their core infrastructure into a single vendor. We have revolutionized the IT industry through our unprecedented innovation and dedication, earning multiple distinguished awards.This is a 100% remote positionAbout Your New RoleAs an AI Engineer, your primary role will be to design, build, and operate the AI layer that powers intelligent automation across the CommandLink platform. You'll tackle challenges in building durable LLM workflows and implementing security controls in production contexts.Key ResponsibilitiesDesign and build robust LLM workflows using Temporal.Work with experts to automate AI-driven solutions across business domains.Implement runtime policy controls for LLM execution.Build frameworks to evaluate LLM outputs alongside production traffic.Defend against prompt injection attacks through advanced design patterns.Integrate tools for defining and auditing security policies.Instrument workflows with comprehensive logging and observability.Collaborate on architecture to set integration standards.What You'll Need for SuccessExperience with complex datasets and production LLM applications.Hands-on experience with Temporal for orchestrating AI workflows.Deep understanding of prompt injection vectors and mitigation strategies.Strong fundamentals in software engineering, particularly in Python and/or Go.Familiarity with LLM APIs and secure integration practices.Why you'll love life at Command|LinkJoin Command|Link to shape the future of business communication and grow in a innovative environment that values your ideas. Enjoy flexible time off, impactful work, and a culture that celebrates strategic thinking.

Hoy, Expira 29/05/2026
Sophia PRO
JobAdvisor
Sophia PRO

AI Engineer, LLM Systems & Agentic Workflows

CommandLink

About Command|Link Command|Link is a global SaaS Platform providing network, voice services, and IT security solutions, helping corporations consolidate their core infrastructure into a single vendor and layering on a proprietary single pane of glass platform. Command|Link has revolutionized the IT industry by tackling the problems our competitors create. In recognition for our unprecedented innovation and dedication, Command|Link was recognized as the SD-WAN Product of the Year, ITSM Visionary Spotlight, UCaaS Product of the Year, NaaS Product of the Year, Supplier of the Year, and the AT&T Strategic Growth Partner. Command|Link has built the only IT platform for scale that solves ISP vendor sprawl and IT headaches. We make it easy for our customers to get more done, maximize uptime and improve the bottom line. Learn more about us here! This is a 100% remote position About Your New Role As an AI Engineer focused on LLM Systems, your primary mandate is to design, build, and operate the AI layer that powers intelligent automation across the CommandLink platform. You'll be working at the engineering layer of agentic AI: building durable, production-grade LLM workflows on top of Temporal, implementing security and policy controls around LLM execution, and solving hard problems around prompt injection, output trust, and runtime governance in domain-specific contexts. You'll work closely with Engineering and Product leads to build agentic workflows to execute deterministic workflows for context aware insights, triage, investigations and remediation into reliable, observable, and policy-compliant AI workflows. That means designing for failure, latency, and adversarial inputs from day one, not retrofitting safety controls after the fact. The space is moving fast, the problems are genuinely unsolved, and we're looking for someone who has strong opinions about how to build AI systems that are trustworthy in production. Key Responsibilities Agentic workflow engineering: design and build multi-step LLM workflows using Temporal as the durable orchestration backbone; handling retries, state, parallelism, human-in-the-loop steps, and long-running agent executionDomain-specific automation: work with subject matter experts to identify, scope, and implement AI-driven automation for specific business and operational domains; own the full delivery from prototype to productionLLM security and policy enforcement: implement runtime policy controls around LLM execution, including prompt injection mitigation, output validation, privilege separation (dual-LLM / quarantined execution patterns), and integration with policy enginesParallel and live evaluation: build evaluation frameworks to assess LLM output quality in parallel with production traffic; implement continuous evals, regression detection, and automated quality gatesPrompt injection defense: apply and adapt state-of-the-art design patterns including the Dual LLM, Plan-Then-Execute, and Code-Then-Execute patterns to harden agent pipelines against adversarial inputsPolicy engine integration: integrate tools such as Sequrity.ai to define, enforce, and audit natural-language security policies over LLM tool use and execution pathsObservability and auditability: instrument AI workflows with full event history, structured logging of prompts and completions, cost tracking, and latency profiling making the behavior of AI systems traceable and debuggableLLM steering and control: implement output steering strategies, structured generation, constrained decoding, and fallback routing to ensure models behave within defined operational envelopesCollaborate on architecture: work across the engineering team to define standards for how AI capabilities are integrated into the product setting patterns others will follow Essential What You'll Need for Success Experience with complex and large datasets2+ years building production LLM-powered applications beyond RAG prototypes; real systems handling real failure modesHands-on experience with Temporal (or equivalent durable execution platforms such as Cadence or Conductor) for orchestrating multi-step, long-running AI workflowsDeep understanding of prompt injection attack vectors, mitigation strategies, and the trade-offs between defense patterns (Dual LLM, CaMeL / Code-Then-Execute, Action-Selector, context minimization)Experience implementing policy controls and guardrails around LLM execution RBAC/PBAC for agents, output filtering, semantic validation, and tool-use restrictionsPractical experience building parallel evaluation pipelines for LLM outputs live evals, shadow scoring, regression suites, and automated quality gatesStrong software engineering fundamentals. You write maintainable, testable code; experience in Python and/or Go preferredFamiliarity with LLM APIs and inference providers (OpenAI, Anthropic, Mistral, or open-weight models via vLLM / Ollama)Understanding of agentic architecture patterns: tool use, multi-agent delegation, structured outputs, memory and context managementExperience integrating LLM systems with external tools and APIs in a secure, auditable wayExperience with langchain or other agentic frameworks Nice To Have Experience with dedicated policy engines for LLM security such as Sequrity.ai, LLM Guard, or equivalent TOML/rules-based policy frameworksFamiliarity with OWASP LLM Top 10 and NIST AI RMF compliance requirementsExperience with structured generation frameworks (Outlines, Instructor, Guidance) for constrained LLM outputsKnowledge of chaos and adversarial testing for AI systems; red-teaming, jailbreak evaluation, and automated adversarial prompt suitesExperience with open-weight model deployment (vLLM, TGI, Ollama) and inference optimizationFamiliarity with MCP (Model Context Protocol) and other protocols for standardised agent tool integrationBackground in security engineering, particularly application-layer threat modelling and or networking and device managementTakes on additional responsibilities and projects as needed to support the success of the team and organization. Why you'll love life at Command|Link Join us at CommandLink, where you'll have the opportunity to shape the future of business communication. We value the innovative spirit and seek individuals ready to bring their unique vision and expertise to a team that values bold ideas and strategic thinking. Are you ready to make an impact? Apply now and be the architect of your career as well as our clients' success. Room to grow at a high-growth companyAn environment that celebrates ideas and innovationYour work will have a tangible impactFlexible time off Fun events at cool locationsEmployee referral bonuses to encourage the addition of great new people to the team At CommandLink, we’re committed to creating a fair, consistent, and efficient hiring experience. As part of our process, we use AI-assisted tools to help review and analyze applications. These tools support our recruiting team by identifying qualifications and experience that align with the requirements of each role. AI tools are used only to assist in the evaluation process — they do not make final hiring decisions. Every application is reviewed by a member of our recruiting or hiring team before any decisions are made.

Hoy, Expira 29/05/2026
Sophia PRO
JobAdvisor
Sophia PRO

AI Engineer, LLM Systems & Agentic Workflows

CommandLink

About Command|Link Command|Link is a global SaaS Platform providing network, voice services, and IT security solutions, helping corporations consolidate their core infrastructure into a single vendor and layering on a proprietary single pane of glass platform. Command|Link has revolutionized the IT industry by tackling the problems our competitors create. In recognition for our unprecedented innovation and dedication, Command|Link was recognized as the SD-WAN Product of the Year, ITSM Visionary Spotlight, UCaaS Product of the Year, NaaS Product of the Year, Supplier of the Year, and the AT&T Strategic Growth Partner. Command|Link has built the only IT platform for scale that solves ISP vendor sprawl and IT headaches. We make it easy for our customers to get more done, maximize uptime and improve the bottom line. Learn more about us here! This is a 100% remote position About Your New Role As an AI Engineer focused on LLM Systems, your primary mandate is to design, build, and operate the AI layer that powers intelligent automation across the CommandLink platform. You'll be working at the engineering layer of agentic AI: building durable, production-grade LLM workflows on top of Temporal, implementing security and policy controls around LLM execution, and solving hard problems around prompt injection, output trust, and runtime governance in domain-specific contexts. You'll work closely with Engineering and Product leads to build agentic workflows to execute deterministic workflows for context aware insights, triage, investigations and remediation into reliable, observable, and policy-compliant AI workflows. That means designing for failure, latency, and adversarial inputs from day one, not retrofitting safety controls after the fact. The space is moving fast, the problems are genuinely unsolved, and we're looking for someone who has strong opinions about how to build AI systems that are trustworthy in production. Key Responsibilities Agentic workflow engineering: design and build multi-step LLM workflows using Temporal as the durable orchestration backbone; handling retries, state, parallelism, human-in-the-loop steps, and long-running agent executionDomain-specific automation: work with subject matter experts to identify, scope, and implement AI-driven automation for specific business and operational domains; own the full delivery from prototype to productionLLM security and policy enforcement: implement runtime policy controls around LLM execution, including prompt injection mitigation, output validation, privilege separation (dual-LLM / quarantined execution patterns), and integration with policy enginesParallel and live evaluation: build evaluation frameworks to assess LLM output quality in parallel with production traffic; implement continuous evals, regression detection, and automated quality gatesPrompt injection defense: apply and adapt state-of-the-art design patterns including the Dual LLM, Plan-Then-Execute, and Code-Then-Execute patterns to harden agent pipelines against adversarial inputsPolicy engine integration: integrate tools such as Sequrity.ai to define, enforce, and audit natural-language security policies over LLM tool use and execution pathsObservability and auditability: instrument AI workflows with full event history, structured logging of prompts and completions, cost tracking, and latency profiling making the behavior of AI systems traceable and debuggableLLM steering and control: implement output steering strategies, structured generation, constrained decoding, and fallback routing to ensure models behave within defined operational envelopesCollaborate on architecture: work across the engineering team to define standards for how AI capabilities are integrated into the product setting patterns others will follow Essential What You'll Need for Success Experience with complex and large datasets2+ years building production LLM-powered applications beyond RAG prototypes; real systems handling real failure modesHands-on experience with Temporal (or equivalent durable execution platforms such as Cadence or Conductor) for orchestrating multi-step, long-running AI workflowsDeep understanding of prompt injection attack vectors, mitigation strategies, and the trade-offs between defense patterns (Dual LLM, CaMeL / Code-Then-Execute, Action-Selector, context minimization)Experience implementing policy controls and guardrails around LLM execution RBAC/PBAC for agents, output filtering, semantic validation, and tool-use restrictionsPractical experience building parallel evaluation pipelines for LLM outputs live evals, shadow scoring, regression suites, and automated quality gatesStrong software engineering fundamentals. You write maintainable, testable code; experience in Python and/or Go preferredFamiliarity with LLM APIs and inference providers (OpenAI, Anthropic, Mistral, or open-weight models via vLLM / Ollama)Understanding of agentic architecture patterns: tool use, multi-agent delegation, structured outputs, memory and context managementExperience integrating LLM systems with external tools and APIs in a secure, auditable wayExperience with langchain or other agentic frameworks Nice To Have Experience with dedicated policy engines for LLM security such as Sequrity.ai, LLM Guard, or equivalent TOML/rules-based policy frameworksFamiliarity with OWASP LLM Top 10 and NIST AI RMF compliance requirementsExperience with structured generation frameworks (Outlines, Instructor, Guidance) for constrained LLM outputsKnowledge of chaos and adversarial testing for AI systems; red-teaming, jailbreak evaluation, and automated adversarial prompt suitesExperience with open-weight model deployment (vLLM, TGI, Ollama) and inference optimizationFamiliarity with MCP (Model Context Protocol) and other protocols for standardised agent tool integrationBackground in security engineering, particularly application-layer threat modelling and or networking and device managementTakes on additional responsibilities and projects as needed to support the success of the team and organization. Why you'll love life at Command|Link Join us at CommandLink, where you'll have the opportunity to shape the future of business communication. We value the innovative spirit and seek individuals ready to bring their unique vision and expertise to a team that values bold ideas and strategic thinking. Are you ready to make an impact? Apply now and be the architect of your career as well as our clients' success. Room to grow at a high-growth companyAn environment that celebrates ideas and innovationYour work will have a tangible impactFlexible time off Fun events at cool locationsEmployee referral bonuses to encourage the addition of great new people to the team At CommandLink, we’re committed to creating a fair, consistent, and efficient hiring experience. As part of our process, we use AI-assisted tools to help review and analyze applications. These tools support our recruiting team by identifying qualifications and experience that align with the requirements of each role. AI tools are used only to assist in the evaluation process — they do not make final hiring decisions. Every application is reviewed by a member of our recruiting or hiring team before any decisions are made.

Hoy, Expira 29/05/2026
Sophia PRO
JobAdvisor
Sophia PRO

AI Engineer, LLM Systems & Agentic Workflows

CommandLink

About Command|Link Command|Link is a global SaaS Platform providing network, voice services, and IT security solutions, helping corporations consolidate their core infrastructure into a single vendor and layering on a proprietary single pane of glass platform. Command|Link has revolutionized the IT industry by tackling the problems our competitors create. In recognition for our unprecedented innovation and dedication, Command|Link was recognized as the SD-WAN Product of the Year, ITSM Visionary Spotlight, UCaaS Product of the Year, NaaS Product of the Year, Supplier of the Year, and the AT&T Strategic Growth Partner. Command|Link has built the only IT platform for scale that solves ISP vendor sprawl and IT headaches. We make it easy for our customers to get more done, maximize uptime and improve the bottom line. Learn more about us here! This is a 100% remote position About Your New Role As an AI Engineer focused on LLM Systems, your primary mandate is to design, build, and operate the AI layer that powers intelligent automation across the CommandLink platform. You'll be working at the engineering layer of agentic AI: building durable, production-grade LLM workflows on top of Temporal, implementing security and policy controls around LLM execution, and solving hard problems around prompt injection, output trust, and runtime governance in domain-specific contexts. You'll work closely with Engineering and Product leads to build agentic workflows to execute deterministic workflows for context aware insights, triage, investigations and remediation into reliable, observable, and policy-compliant AI workflows. That means designing for failure, latency, and adversarial inputs from day one, not retrofitting safety controls after the fact. The space is moving fast, the problems are genuinely unsolved, and we're looking for someone who has strong opinions about how to build AI systems that are trustworthy in production. Key Responsibilities Agentic workflow engineering: design and build multi-step LLM workflows using Temporal as the durable orchestration backbone; handling retries, state, parallelism, human-in-the-loop steps, and long-running agent executionDomain-specific automation: work with subject matter experts to identify, scope, and implement AI-driven automation for specific business and operational domains; own the full delivery from prototype to productionLLM security and policy enforcement: implement runtime policy controls around LLM execution, including prompt injection mitigation, output validation, privilege separation (dual-LLM / quarantined execution patterns), and integration with policy enginesParallel and live evaluation: build evaluation frameworks to assess LLM output quality in parallel with production traffic; implement continuous evals, regression detection, and automated quality gatesPrompt injection defense: apply and adapt state-of-the-art design patterns including the Dual LLM, Plan-Then-Execute, and Code-Then-Execute patterns to harden agent pipelines against adversarial inputsPolicy engine integration: integrate tools such as Sequrity.ai to define, enforce, and audit natural-language security policies over LLM tool use and execution pathsObservability and auditability: instrument AI workflows with full event history, structured logging of prompts and completions, cost tracking, and latency profiling making the behavior of AI systems traceable and debuggableLLM steering and control: implement output steering strategies, structured generation, constrained decoding, and fallback routing to ensure models behave within defined operational envelopesCollaborate on architecture: work across the engineering team to define standards for how AI capabilities are integrated into the product setting patterns others will follow Essential What You'll Need for Success Experience with complex and large datasets2+ years building production LLM-powered applications beyond RAG prototypes; real systems handling real failure modesHands-on experience with Temporal (or equivalent durable execution platforms such as Cadence or Conductor) for orchestrating multi-step, long-running AI workflowsDeep understanding of prompt injection attack vectors, mitigation strategies, and the trade-offs between defense patterns (Dual LLM, CaMeL / Code-Then-Execute, Action-Selector, context minimization)Experience implementing policy controls and guardrails around LLM execution RBAC/PBAC for agents, output filtering, semantic validation, and tool-use restrictionsPractical experience building parallel evaluation pipelines for LLM outputs live evals, shadow scoring, regression suites, and automated quality gatesStrong software engineering fundamentals. You write maintainable, testable code; experience in Python and/or Go preferredFamiliarity with LLM APIs and inference providers (OpenAI, Anthropic, Mistral, or open-weight models via vLLM / Ollama)Understanding of agentic architecture patterns: tool use, multi-agent delegation, structured outputs, memory and context managementExperience integrating LLM systems with external tools and APIs in a secure, auditable wayExperience with langchain or other agentic frameworks Nice To Have Experience with dedicated policy engines for LLM security such as Sequrity.ai, LLM Guard, or equivalent TOML/rules-based policy frameworksFamiliarity with OWASP LLM Top 10 and NIST AI RMF compliance requirementsExperience with structured generation frameworks (Outlines, Instructor, Guidance) for constrained LLM outputsKnowledge of chaos and adversarial testing for AI systems; red-teaming, jailbreak evaluation, and automated adversarial prompt suitesExperience with open-weight model deployment (vLLM, TGI, Ollama) and inference optimizationFamiliarity with MCP (Model Context Protocol) and other protocols for standardised agent tool integrationBackground in security engineering, particularly application-layer threat modelling and or networking and device managementTakes on additional responsibilities and projects as needed to support the success of the team and organization. Why you'll love life at Command|Link Join us at CommandLink, where you'll have the opportunity to shape the future of business communication. We value the innovative spirit and seek individuals ready to bring their unique vision and expertise to a team that values bold ideas and strategic thinking. Are you ready to make an impact? Apply now and be the architect of your career as well as our clients' success. Room to grow at a high-growth companyAn environment that celebrates ideas and innovationYour work will have a tangible impactFlexible time off Fun events at cool locationsEmployee referral bonuses to encourage the addition of great new people to the team At CommandLink, we’re committed to creating a fair, consistent, and efficient hiring experience. As part of our process, we use AI-assisted tools to help review and analyze applications. These tools support our recruiting team by identifying qualifications and experience that align with the requirements of each role. AI tools are used only to assist in the evaluation process — they do not make final hiring decisions. Every application is reviewed by a member of our recruiting or hiring team before any decisions are made.

Hoy, Expira 29/05/2026
Sophia PRO