A degree in Computer Science, Information Technology or equivalent professional experience
A strong software engineering background (backend and/or full stack), combined with a product mindset and consulting experience
Hands‑on experience delivering (generative) AI solutions into production, beyond demos or prototypes
Solid knowledge of common LLM application patterns, including RAG, graph‑based approaches, embeddings, vector search, tool or function calling, agents and guardrails
Experience with at least one LLM framework or stack, such as Semantic Kernel, LangChain or LlamaIndex, as well as structured prompt engineering
The ability to evaluate AI systems systematically, including offline and online evaluation, benchmark design, qualitative reviews and regression testing for prompts and retrieval
Experience with distributed systems and modern integration patterns such as APIs, microservices, service‑oriented architectures and event‑driven design
Proficiency in at least one programming language and framework, for example Python, TypeScript/Node.js, C# or Java, along with strong API design skills
Solid cloud knowledge across Azure, AWS or GCP; experience with Azure AI services (e.g. Azure OpenAI, Azure AI Search, Document Intelligence) is a plus
A DevOps mindset, including Git, CI/CD, containers, infrastructure as code (e.g. Terraform or Bicep), observability (logs, metrics, tracing) and cost awareness
Security fundamentals, such as OAuth2/OIDC, authentication and authorisation, secrets management, encryption and secure coding practices
Strong communication skills, a proactive attitude, problem‑solving capability and attention to detail; technical leadership or mentoring experience is a plus
Fluent English for close collaboration within the team; German is a plus, but not mandatory