{"id":8853,"date":"2024-11-19T12:17:51","date_gmt":"2024-11-19T11:17:51","guid":{"rendered":"https:\/\/republify.se\/?post_type=product&#038;p=8853"},"modified":"2026-01-08T19:04:48","modified_gmt":"2026-01-08T18:04:48","slug":"responsible-ai-in-software-development-hands-on","status":"publish","type":"product","link":"https:\/\/republify.se\/index.php\/produkt\/responsible-ai-in-software-development-hands-on\/","title":{"rendered":"Responsible AI in software development &#8211; hands-on"},"content":{"rendered":"<div class=\"detail-text\">\n<div class=\"detail-text\">\n<div class=\"detail-text\">\n<div class=\"note-wrapper\">\n<div class=\"row\">\n<div class=\"col-12\">\n<div class=\"note-wrapper__text\">\n<div class=\"course-details\">\n<div class=\"row\">\n<div class=\"col-12\">\n<div class=\"detail-text\">\n<p>Generative AI is transforming the software industry. Tools like ChatGPT and GitHub Copilot enable developers to code more efficiently than ever before. While this sparks excitement, it also raises concerns, so many stakeholders balance their optimism with caution. Although these tools are advancing rapidly, they still lack the sophistication to account for subtle but important aspects of software products. This course frames this evolution through the well-established principles of Responsible AI.<\/p>\n<p>The training highlights the capabilities and limitations of generative AI (GenAI) tools such as GitHub Copilot and Codeium, offering insights into their role in code generation and beyond. Topics include smart prompt engineering, not only during the implementation phase but also during requirements capturing, design, testing, and maintenance. Participants will learn the best practices and pitfalls of using AI-generated code, with hands-on labs demonstrating potential security flaws such as dependency hallucination and path traversal. 
By the end, software engineers and managers will have a clear understanding of how to responsibly integrate GenAI tools into the various stages of the software development lifecycle.<\/p>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<h2>Outline<\/h2>\n<p>\u2022 A brief history of Artificial Intelligence<br \/>\n\u2022 Responsible AI<br \/>\n\u2022 An overview of AI and ML security<br \/>\n\u2022 Using GenAI responsibly in software development<br \/>\n\u2022 Summary and takeaways<br \/>\n\u2022 Standards and references<\/p>\n<h3>What you&#8217;ll have learned<\/h3>\n<p>\u2022 Various aspects of responsible AI<br \/>\n\u2022 Essentials of machine learning security<br \/>\n\u2022 How to use generative AI responsibly in software development<br \/>\n\u2022 Prompt engineering for optimal outcomes<br \/>\n\u2022 How to apply generative AI throughout the SDLC<\/p>\n<\/div>\n<h2>Content<\/h2>\n<p><strong>A brief history of Artificial Intelligence<\/strong><\/p>\n<p>The origins of AI<\/p>\n<p>Neural networks and &#8220;probability engines&#8221;<\/p>\n<p>Robustness of ML systems<\/p>\n<p>Early ML coding tools<\/p>\n<p>The AI coding revolution of the 2020s<\/p>\n<p>&nbsp;<\/p>\n<p><strong>Responsible AI<\/strong><\/p>\n<p>What is responsible AI?<\/p>\n<p>Explainability and interpretability<\/p>\n<p>Safety, security and resilience<\/p>\n<p>Mitigation of harmful bias<\/p>\n<p>Reproducibility and consistency<\/p>\n<p>&nbsp;<\/p>\n<p><strong>Lab \u2013 Experimenting with reproducibility in Copilot<\/strong><\/p>\n<p>Security and responsible AI in software development<\/p>\n<p>&nbsp;<\/p>\n<p><strong>An overview of AI and ML security<\/strong><\/p>\n<p>A quick overview of ML for non-specialists<\/p>\n<p>GIGO and other well-known ML pitfalls<\/p>\n<p>Malicious use of AI<\/p>\n<p>Real-life attacks against AI<\/p>\n<p>Subverting AI to attack others<\/p>\n<p>AI and ML security standards<\/p>\n<p>A quick look at ML hacking: 
evasion<\/p>\n<p>A quick look at ML hacking: poisoning<\/p>\n<p>A quick look at ML hacking: model inversion<\/p>\n<p>A quick look at ML hacking: model stealing<\/p>\n<p>&nbsp;<\/p>\n<p><strong>The security of large language models<\/strong><\/p>\n<ul>\n<li>Security of LLMs vs ML security<\/li>\n<li>OWASP LLM Top 10<\/li>\n<li>Practical attacks on LLMs<\/li>\n<li>Practical LLM defenses<\/li>\n<\/ul>\n<p>&nbsp;<\/p>\n<p><strong>Using GenAI responsibly in software development<\/strong><\/p>\n<p>LLM code generation basics<\/p>\n<p>Basic building blocks and concepts<\/p>\n<p>GenAI tools in coding: Copilot, Codeium and others<\/p>\n<p>Can AI\u2026 take care of the &#8216;boring parts&#8217;?<\/p>\n<p>Can AI\u2026 be more thorough?<\/p>\n<p>Can AI\u2026 teach you how to code?<\/p>\n<p>&nbsp;<\/p>\n<p><strong>Lab \u2013 Experimenting with an unfamiliar API in Copilot<\/strong><\/p>\n<p>GenAI as a productivity boost<\/p>\n<p>&nbsp;<\/p>\n<p><strong>The dark side of GenAI<\/strong><\/p>\n<ul>\n<li>Reviewing generated code \u2013 the black box blues<\/li>\n<li>The danger of hallucinations<\/li>\n<li>The effect of GenAI on programming skills<\/li>\n<li>Where AI code generation doesn&#8217;t do well<\/li>\n<\/ul>\n<p>&nbsp;<\/p>\n<p><strong>Prompt engineering techniques for code generation<\/strong><\/p>\n<ul>\n<li>Why is a good prompt so important?<\/li>\n<li>Zero-shot, few-shot, and chain of thought prompting<\/li>\n<li>Lab \u2013 Experimenting with prompts in Copilot<\/li>\n<li>Using prompt patterns for code generation<\/li>\n<li>Software design patterns vs prompt patterns<\/li>\n<li>The six categories of prompt patterns<\/li>\n<li>Using various prompt patterns<\/li>\n<li>Best practices and pitfalls for code-generating AI prompts<\/li>\n<li>Least-to-Most: decomposition of complex tasks<\/li>\n<li>Lab \u2013 Task decomposition with Copilot<\/li>\n<li>The importance of examples and avoiding ambiguity<\/li>\n<li>Unit tests, TDD and GenAI<\/li>\n<li>Lab \u2013 Test-based code 
generation with Copilot<\/li>\n<li>Establishing the context for generative AI<\/li>\n<li>Lab \u2013 Experimenting with context in Copilot<\/li>\n<li>Enforcing and following token limits<\/li>\n<\/ul>\n<p>&nbsp;<\/p>\n<p><strong>Integrating generative AI into the SDLC<\/strong><\/p>\n<ul>\n<li>Using GenAI beyond code generation<\/li>\n<li>Using AI during requirements specification<\/li>\n<li>Prompt patterns for requirements capturing<\/li>\n<li>Software design and AI<\/li>\n<li>Prompt patterns for software design<\/li>\n<li>Using AI during implementation<\/li>\n<li>Prompt patterns for implementation<\/li>\n<li>Lab \u2013 Finding hidden assumptions with Copilot<\/li>\n<li>Using AI during testing and QA<\/li>\n<li>Using AI during maintenance<\/li>\n<li>Prompt patterns for refactoring<\/li>\n<li>Lab \u2013 Experimenting with code refactoring in Copilot<\/li>\n<li>Prompt patterns for change request simulation<\/li>\n<\/ul>\n<p>&nbsp;<\/p>\n<p><strong>Security of AI-generated code<\/strong><\/p>\n<ul>\n<li>Security of AI-generated code<\/li>\n<li>Practical attacks against code generation tools<\/li>\n<li>Dependency hallucination via generative AI<\/li>\n<li>Case study \u2013 A history of GitHub Copilot weaknesses (up to mid-2024)<\/li>\n<li>A sample vulnerability<\/li>\n<li>Path traversal<\/li>\n<li>Lab \u2013 Path traversal<\/li>\n<li>Path traversal-related examples<\/li>\n<li>Additional challenges in Windows<\/li>\n<li>Case study \u2013 File spoofing in WinRAR<\/li>\n<li>Path traversal best practices<\/li>\n<li>Lab \u2013 Path canonicalization<\/li>\n<li>Lab \u2013 Experimenting with path traversal in Copilot<\/li>\n<li>Summary and takeaways<\/li>\n<\/ul>\n<p>&nbsp;<\/p>\n<p><strong>Responsible AI principles in software development<\/strong><\/p>\n<p><strong>Resources and additional guidance<\/strong><\/p>\n<div class=\"detail-text\">\n<div class=\"note-wrapper\">\n<div class=\"row\">\n<div class=\"col-12\">\n<div class=\"note-wrapper__text\">\n<div 
class=\"course-details\">\n<div class=\"row\">\n<div class=\"col-12\">\n<div class=\"detail-text\">\n<p>&nbsp;<\/p>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<div class=\"note-wrapper\">\n<div class=\"row\">\n<div class=\"col-12\">\n<h4 class=\"section-title note-wrapper__title\">Note:<\/h4>\n<div class=\"note-wrapper__text\">\n<p><em>A must-have primer for those concerned about using GenAI tools in their software development projects. Building on these foundations, and depending on the technology stack, we suggest continuing with one of the Generative AI courses &#8211; see Code responsibly with generative AI in C++\/Java\/C#\/Python. However, if you develop machine learning solutions, you can also continue your journey with the comprehensive 4-day Machine learning security course.<\/em><\/p>\n<p>&nbsp;<\/p>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<div class=\"detail-text\">\n<h3>The course is delivered in collaboration with<\/h3>\n<p><img decoding=\"async\" class=\"alignnone size-medium wp-image-1202 lazyload\" data-src=\"https:\/\/republify.se\/wp-content\/uploads\/2022\/02\/cydrill_logo-300x83.jpg\" alt=\"Cydrill logo\" width=\"300\" height=\"83\" data-srcset=\"https:\/\/republify.se\/wp-content\/uploads\/2022\/02\/cydrill_logo-300x83.jpg 300w, https:\/\/republify.se\/wp-content\/uploads\/2022\/02\/cydrill_logo.jpg 427w\" data-sizes=\"(max-width: 300px) 100vw, 300px\" src=\"data:image\/svg+xml;base64,PHN2ZyB3aWR0aD0iMSIgaGVpZ2h0PSIxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciPjwvc3ZnPg==\" style=\"--smush-placeholder-width: 300px; --smush-placeholder-aspect-ratio: 300\/83;\" \/><\/p>\n<\/div>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p><strong>After a short overview of AI and specifically responsible AI, participants delve into the complex world of machine learning (ML), focusing on how these solutions can be compromised.<\/strong><\/p>\n<p>Threats and vulnerabilities such as model evasion, poisoning, and inversion attacks are 
explained in an accessible way through real-world case studies and live demonstrations. Finally, we give an overview of the security challenges of large language models (LLMs) and explore practical defenses.\u00a0<em>The training is delivered instructor-led, online or on-site, as group training.<\/em><\/p>\n","protected":false},"featured_media":9022,"comment_status":"open","ping_status":"closed","template":"","meta":{"wds_primary_product_brand":0,"wds_primary_product_cat":0},"product_brand":[],"product_cat":[137,134,73,57,31,203],"product_tag":[],"class_list":{"0":"post-8853","1":"product","2":"type-product","3":"status-publish","4":"has-post-thumbnail","6":"product_cat-ai","7":"product_cat-cyber-security","8":"product_cat-java","9":"product_cat-secure-coding","10":"product_cat-security","11":"product_cat-industri","13":"first","14":"instock","15":"taxable","16":"shipping-taxable","17":"purchasable","18":"product-type-simple"},"jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/republify.se\/index.php\/wp-json\/wp\/v2\/product\/8853","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/republify.se\/index.php\/wp-json\/wp\/v2\/product"}],"about":[{"href":"https:\/\/republify.se\/index.php\/wp-json\/wp\/v2\/types\/product"}],"replies":[{"embeddable":true,"href":"https:\/\/republify.se\/index.php\/wp-json\/wp\/v2\/comments?post=8853"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/republify.se\/index.php\/wp-json\/wp\/v2\/media\/9022"}],"wp:attachment":[{"href":"https:\/\/republify.se\/index.php\/wp-json\/wp\/v2\/media?parent=8853"}],"wp:term":[{"taxonomy":"product_brand","embeddable":true,"href":"https:\/\/republify.se\/index.php\/wp-json\/wp\/v2\/product_brand?post=8853"},{"taxonomy":"product_cat","embeddable":true,"href":"https:\/\/republify.se\/index.php\/wp-json\/wp\/v2\/product_cat?post=8853"},{"taxonomy":"product_tag","embeddable":true,"href":"https:\/\/republify.se\/index.php\/wp-json\/wp\/v2\/product_tag?post=8853"}],"curies":[{"nam
e":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}