This subject focuses on applying prompt engineering practices in high-value professional domains including healthcare, finance, law, marketing, software development, analytics, customer support, and education. Learners will adapt prompting strategies to domain-specific regulations, data types, workflows, and quality expectations.
Upon completion of this subject, learners will be able to analyze domain requirements, risks, and data formats and then design prompts, workflows, and guardrails that fit those constraints. They will be able to create and evaluate prompts for medical summarization, financial analysis, contract review, content marketing, code generation, data analysis, customer service, and instructional design, while respecting domain-specific compliance and accuracy needs.
This topic explores how LLMs can be used to summarize and compare contracts, flag unusual or risky clauses, align documents with internal policy templates, and provide plain-language explanations of legal terms. Learners design prompts that segment documents into clauses, ask models to classify risks or deviations, and generate question lists for human counsel. The topic underscores the risks of hallucinated case law or statutes and stresses the importance of retrieval-based approaches that link to authoritative sources. Learners include disclaimers in prompts and outputs and configure systems such that AI suggestions are visibly distinct from official legal opinions. They also reflect on confidentiality constraints for client documents and how to avoid sending sensitive content to third-party APIs when not allowed.
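The clause-segmentation and risk-classification pattern described above can be sketched as a simple prompt builder. This is an illustrative sketch only: the risk categories, disclaimer wording, and policy instructions are assumptions for the example, not a vetted legal workflow.

```python
# Sketch of a clause-review prompt builder. The risk categories and
# disclaimer wording are illustrative assumptions, not from the course.
RISK_CATEGORIES = ["indemnification", "liability cap", "auto-renewal", "termination"]

DISCLAIMER = ("AI-generated triage only; not legal advice. "
              "Verify with qualified counsel.")

def build_clause_review_prompt(clauses: list[str]) -> str:
    """Ask the model to classify each clause's risk and note policy
    deviations, with the disclaimer made an explicit part of the output."""
    numbered = "\n".join(f"{i + 1}. {c}" for i, c in enumerate(clauses))
    return (
        "You are assisting a contracts team. For each numbered clause, "
        f"classify it into one of {RISK_CATEGORIES} or 'other', rate its "
        "risk as low/medium/high, and note deviations from standard policy. "
        "Do not cite case law or statutes unless they appear in the text.\n\n"
        f"Clauses:\n{numbered}\n\n"
        f"Begin your answer with this disclaimer: {DISCLAIMER}"
    )

prompt = build_clause_review_prompt(
    ["Either party may terminate with 30 days' notice.",
     "This agreement renews automatically each year."])
```

Note how the disclaimer is injected into the prompt itself, keeping AI suggestions visibly distinct from legal opinions as the topic requires.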
This topic covers marketing tasks such as writing blog outlines, ad copy, email sequences, product descriptions, and social media posts. Learners build prompts that inject brand guidelines, tone parameters, target audience definitions, and channel-specific constraints. They explore frameworks for headlines, calls to action, and storytelling that can be encoded into prompt templates. The topic also warns against over-automation that results in generic or spammy content and emphasizes the need for human review. It addresses regulatory guidelines around advertising claims, endorsements, and disclosures. Learners practice A/B testing different prompt templates to discover which yield higher engagement while maintaining ethical and legal standards.
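The A/B-testing exercise above can be sketched as a minimal harness that assigns each user a prompt-template variant and fills in brand fields. The template text, field names, and deterministic split rule are assumptions for the sketch, not a production experiment framework.

```python
# Illustrative A/B harness for prompt templates. Template wording, field
# names, and the even/odd split are assumptions for the example.
TEMPLATES = {
    "A": "Write a {channel} post for {audience} in a {tone} tone about {product}.",
    "B": ("You are the brand voice of {product}. Draft a {tone} {channel} post "
          "for {audience}, ending with a clear call to action."),
}

def assign_variant(user_id: int) -> str:
    """Deterministic split so the same user always sees the same template."""
    return "A" if user_id % 2 == 0 else "B"

def render(variant: str, **fields: str) -> str:
    """Fill brand/audience/channel fields into the chosen template."""
    return TEMPLATES[variant].format(**fields)

copy_prompt = render(assign_variant(7), channel="Instagram",
                     audience="runners", tone="friendly", product="trail shoes")
```

Engagement metrics per variant would then be logged downstream; the point here is only that the template, not the copy, is the unit under test.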
This topic focuses on using models such as GitHub Copilot, OpenAI's code models, and Code Llama as assistants in the software development lifecycle. Learners design prompts that provide clear specifications, function signatures, edge cases, and constraints, and then request code in specific languages or frameworks. They practice prompting for unit tests, comments, and documentation generation. The topic emphasizes safe use, such as scanning generated code for security issues, license conflicts, and performance problems. It highlights that prompt engineers must still understand programming fundamentals to evaluate and adapt generated code. The topic also covers integration with IDE plugins and CI/CD pipelines, and how to prompt for step-by-step explanations of legacy code to support maintenance.
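The spec-driven prompting described above, providing a signature, specification, and explicit edge cases, can be sketched as a small builder. The function name, wording, and example spec are illustrative assumptions.

```python
# Sketch of a spec-to-prompt builder for code generation. Wording and the
# example specification are illustrative assumptions.
def build_codegen_prompt(signature: str, spec: str, edge_cases: list[str],
                         language: str = "Python") -> str:
    """Turn a structured specification into a code-generation prompt that
    also requests unit tests and documentation."""
    cases = "\n".join(f"- {c}" for c in edge_cases)
    return (
        f"Implement the following {language} function.\n"
        f"Signature: {signature}\n"
        f"Specification: {spec}\n"
        f"Edge cases to handle:\n{cases}\n"
        "Also generate unit tests and a docstring. "
        "Use only the standard library."
    )

prompt = build_codegen_prompt(
    "def dedupe(items: list[str]) -> list[str]",
    "Remove duplicates while preserving first-seen order.",
    ["empty list", "all items identical", "mixed-case strings"],
)
```

Listing edge cases up front gives reviewers a checklist for the generated tests, which supports the topic's point that engineers must still evaluate the output themselves.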
This topic explores healthcare applications such as summarizing electronic health records, generating patient discharge instructions, converting clinical language into plain language, and drafting prior authorization letters. It highlights stringent safety requirements and regulations (e.g., HIPAA, GDPR) that govern how protected health information can be used. Learners design prompts that explicitly state limitations, direct the model to avoid prescribing or diagnosing, and encourage verification by qualified professionals. They also explore how RAG can be integrated with clinical guidelines and formularies. The topic cites emerging research that positions prompt engineering as a key skill for clinicians using AI tools, while reiterating the necessity of oversight and accountability.
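A guardrail-heavy system prompt of the kind described above might look like the sketch below. The exact constraint wording is an assumption for illustration, not a vetted clinical policy, and real deployments would also need PHI-handling controls outside the prompt.

```python
# Illustrative healthcare system prompt. The guardrail wording is an
# assumption for the sketch, not a reviewed clinical policy.
HEALTHCARE_SYSTEM_PROMPT = """\
You summarize clinical documents for qualified staff.
Constraints:
- Do not diagnose conditions or recommend prescriptions or dosages.
- State clearly when information is missing from the source record.
- Use only the text provided; do not add external medical claims.
- End every summary with: 'For review by a licensed clinician.'"""

def build_discharge_prompt(record_text: str) -> str:
    """Combine the guardrail system prompt with a plain-language
    discharge-instruction task over the provided record."""
    return (HEALTHCARE_SYSTEM_PROMPT
            + "\n\nSummarize this record as plain-language discharge "
              "instructions:\n" + record_text)

prompt = build_discharge_prompt("Patient admitted 03/02 with pneumonia; "
                                "discharged on oral antibiotics.")
```

The closing-line instruction makes human verification part of the output contract, matching the topic's emphasis on oversight.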
This topic covers financial-domain prompts such as summarizing annual reports, extracting key ratios, generating management commentary summaries, drafting risk factor descriptions, and explaining market events. It discusses regulatory expectations around disclaimers and suitability, and how LLMs should be framed as tools, not advisors. Learners design prompts that always include caveats, direct users to consult licensed professionals, and avoid specific buy/sell recommendations. They practice using structured outputs (e.g., JSON with ratio fields) to feed analytics dashboards and discuss how RAG can link models to internal research libraries, macroeconomic datasets, and regulatory filings. Emphasis is placed on testing outputs for numerical correctness and narrative bias, particularly in highly regulated capital markets.
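The structured-output practice above, JSON with ratio fields feeding a dashboard, implies a validation step before the numbers are trusted. Here is a minimal sketch; the required field names are assumptions for the example.

```python
import json

# Sketch of validating a model's structured ratio output before it feeds
# an analytics dashboard. Field names are illustrative assumptions.
REQUIRED_RATIOS = ("current_ratio", "debt_to_equity", "net_margin")

def parse_ratio_output(raw: str) -> dict:
    """Parse the model's JSON reply, rejecting missing or non-numeric fields
    so malformed output never reaches the dashboard silently."""
    data = json.loads(raw)
    for key in REQUIRED_RATIOS:
        value = data.get(key)
        # bool is a subclass of int in Python, so exclude it explicitly
        if not isinstance(value, (int, float)) or isinstance(value, bool):
            raise ValueError(f"missing or non-numeric field: {key}")
    return data

ratios = parse_ratio_output(
    '{"current_ratio": 1.8, "debt_to_equity": 0.6, "net_margin": 0.12}')
```

Schema validation catches malformed output, but as the topic notes, numerical correctness against the source filing still requires separate testing.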
This topic addresses prompts for generating analytical code (SQL, Python with pandas, R), designing experiments, interpreting statistical results, and summarizing research papers. Learners practice transforming plain-language questions into structured analytical tasks, such as specifying variables, aggregations, and visualization types. They design prompts that encourage the model to explain statistical assumptions, limitations, and potential confounders. For literature review, learners craft prompts that request synthesis of findings across multiple papers, comparison of methodologies, and identification of research gaps, while highlighting that final interpretations must be validated against original sources. The topic stresses the risk of fabricated citations and teaches prompt patterns that reduce but do not eliminate this risk, such as asking the model to work only from user-provided abstracts.
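Transforming a plain-language question into a structured analytical task, as described above, can be sketched with a small spec object that pins down variables, aggregation, and visualization before the model writes any SQL. The field names, table, and example question are assumptions for the sketch.

```python
from dataclasses import dataclass

# Sketch: a structured analysis spec that drives a SQL-generation prompt.
# Field names, the table name, and the example are illustrative assumptions.
@dataclass
class AnalysisSpec:
    question: str      # the original plain-language question
    table: str         # target table for the generated query
    variables: list[str]
    aggregation: str
    chart: str         # requested visualization type

def build_sql_prompt(spec: AnalysisSpec) -> str:
    """Turn the spec into a prompt that also asks for assumptions/confounders."""
    return (
        f"Question: {spec.question}\n"
        f"Write a SQL query against table '{spec.table}' selecting "
        f"{', '.join(spec.variables)} with aggregation {spec.aggregation}. "
        f"Then suggest a {spec.chart} to visualize the result, and list any "
        "statistical assumptions or confounders the reader should check."
    )

spec = AnalysisSpec("How do monthly sales differ by region?",
                    "sales", ["region", "month", "revenue"],
                    "SUM(revenue) GROUP BY region, month", "grouped bar chart")
prompt = build_sql_prompt(spec)
```

Forcing the question through an explicit spec makes ambiguities (which variables? which aggregation?) surface before generation rather than after.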
This topic covers prompts that control the behavior of customer service bots across channels such as web chat, messaging apps, and email. It describes how to encode brand tone (formal, friendly, concise), compliance constraints (refund policies, privacy guidelines), and escalation logic (when to hand over to a human). Learners design system prompts that instruct the model to always consult a knowledge base via RAG, to avoid making up policies, and to ask clarifying questions when information is missing. The topic also discusses evaluation of support bots using metrics like resolution rate, CSAT, and containment, and how to refine prompts and flows over time using real conversation logs. Edge cases such as abusive users, highly emotional interactions, and legally sensitive situations are examined with prompt patterns that prioritize safety and documentation.
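Escalation logic like that described above is often implemented outside the prompt, as a deterministic check around the model. The trigger phrases and failure threshold below are illustrative assumptions, not a production policy.

```python
# Simplified escalation check for a support bot. Trigger phrases and the
# failed-turn threshold are illustrative assumptions.
ESCALATION_TRIGGERS = ("lawyer", "lawsuit", "chargeback", "speak to a human")

def should_escalate(message: str, failed_turns: int) -> bool:
    """Hand over to a human on legally sensitive phrases or after the bot
    has failed to resolve the issue several turns in a row."""
    lowered = message.lower()
    if any(trigger in lowered for trigger in ESCALATION_TRIGGERS):
        return True
    return failed_turns >= 3
```

Keeping this check in code rather than in the system prompt means the handover rule cannot be talked around by the model or the user, which matters for the legally sensitive cases the topic highlights.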
This topic focuses on using prompt engineering to power tutoring systems, content authoring tools, and assessment aids. Learners design prompts that specify the target learner profile, prior-knowledge assumptions, learning goals, and pedagogical style (Socratic questioning, direct instruction, project-based). They practice generating modular learning objects such as worked examples, analogies, and formative quiz items tagged by Bloom’s level and difficulty. The topic also covers guardrails in educational contexts, including discouraging direct answer giving on graded assignments, encouraging metacognitive reflection, and verifying correctness of explanations. It highlights integration with learning platforms and analytics, where structured outputs can be used to tag content and track learner progress. Bias and inclusivity considerations in educational content are also discussed.
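The Bloom's-level tagging above implies a structured output contract that the platform can validate. The sketch below shows one such contract; the JSON keys and prompt wording are assumptions for illustration.

```python
import json

# Sketch of a quiz-generation prompt plus an output check for Bloom's-level
# tagging. JSON keys and wording are illustrative assumptions.
BLOOM_LEVELS = ("remember", "understand", "apply", "analyze",
                "evaluate", "create")

QUIZ_PROMPT = (
    "Write one multiple-choice question on {topic} for a learner who knows "
    "{prior_knowledge}. Return JSON with keys: question, options, answer, "
    "bloom_level (one of " + ", ".join(BLOOM_LEVELS) + "), difficulty (1-5)."
)

def validate_quiz_item(raw: str) -> dict:
    """Reject model output whose tags or answer key are malformed, so only
    well-tagged items reach the learning platform."""
    item = json.loads(raw)
    if item["bloom_level"] not in BLOOM_LEVELS:
        raise ValueError("unknown Bloom level")
    if not 1 <= item["difficulty"] <= 5:
        raise ValueError("difficulty out of range")
    if item["answer"] not in item["options"]:
        raise ValueError("answer not among options")
    return item

sample = ('{"question": "What is 2+2?", "options": ["2", "4"], '
          '"answer": "4", "bloom_level": "remember", "difficulty": 1}')
item = validate_quiz_item(sample)
```

Because the tags are machine-checked, the same structured output can feed the analytics and progress-tracking integration the topic describes.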