From my first bedtime story to late‑night coding sessions, I’ve always been captivated by the power of myth.

This tool—Hero’s Journey Story Generator—is born from that passion.

It stitches together your personal details and time‑tested narrative arcs to create legends that feel both intimate and epic.

Whether you’re an educator looking to spark students’ imaginations or a developer craving creative side projects, this generator offers:

  • A familiar story framework (Campbell’s Hero’s Journey) that resonates across cultures
  • Rich symbolic depth (Thompson Motif Index) for authentic mythic flavors
  • Local AI inference (Ollama) so you stay in control of your data
  • Offline narration (pyttsx3) for immersive, hands‑free listening
  • Markdown export for seamless integration into blogs, docs, or wikis

I’ve structured this guide to walk you through each component—no theory overload, just practical, tested code you can adapt today.


Objectives

The goal, for both the code and this article, is a simple, maintainable solution with a clear separation of concerns and sound architecture, even for a quick, quirky script like this one.

The objectives are:

  1. Personalize each myth with the user’s name, birthdate, and life stage.
  2. Enrich narratives using authentic folklore motifs.
  3. Structure the code into clear, testable modules.
  4. Secure data and inference by running entirely offline.
  5. Deliver both text and audio outputs for maximum accessibility.

Campbell’s Hero’s Journey Overview

At the heart of every epic lies Joseph Campbell’s Hero’s Journey, a narrative template (popularly condensed into 12 stages) that underpins myths across cultures.

Key stages include:

  • Ordinary World – The hero’s familiar environment
  • Call to Adventure – A challenge beckons
  • Refusal of the Call – Initial hesitation
  • Meeting the Mentor – Wise guidance appears
  • Crossing the Threshold – Entering the unknown
  • Road of Trials – Facing tests and ordeals
  • Transformation – Deep metamorphosis
  • Atonement & Reward – Claiming the treasure
  • Return with Elixir – Bringing knowledge back home

By mapping our generator’s prompts to these stages, we ensure each story follows a familiar arc while remaining uniquely personalized.
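To make this mapping concrete, here is a minimal sketch of how stages could be encoded as prompt seeds. The stage selection and template wording below are illustrative assumptions, not the generator’s exact prompts:

    # Illustrative stage-to-prompt mapping (assumed wording, not the
    # generator's actual templates).
    JOURNEY_STAGES = {
        "Ordinary World": "Describe {name}'s familiar life before the adventure.",
        "Call to Adventure": "A challenge disrupts {name}'s world and demands an answer.",
        "Meeting the Mentor": "{name} receives guidance from an unexpected mentor.",
        "Crossing the Threshold": "{name} steps into the unknown for the first time.",
        "Road of Trials": "{name} faces tests that forge a new identity.",
        "Return with Elixir": "{name} comes home transformed, bearing hard-won wisdom.",
    }

    for stage, template in JOURNEY_STAGES.items():
        prompt = template.format(name="Aria")  # personalize each stage prompt
        print(f"[{stage}] {prompt}")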


Ollama: Local LLM Interface

Ollama is a local LLM runtime, with a command-line tool and an official Python client, that hosts and runs large language models on your own machine.

Instead of relying on remote APIs, Ollama pulls models (e.g., llama3.2:1b) locally and serves them via a simple interface.

This allows for:

  • Privacy: No data leaves your device.
  • Performance: Local inference eliminates network latency. Small models can run on a consumer CPU.
  • Flexibility: Swap models by changing a single string in your code.
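Before wiring it into the generator, a quick sketch shows the Python client in action (the model name and prompt here are placeholders):

    from ollama import Client

    client = Client()  # talks to the local Ollama server (default: http://localhost:11434)
    client.pull('llama3.2:1b')  # fetch the model if it is not already cached locally
    response = client.generate(model='llama3.2:1b', prompt='Tell a one-line myth.')
    print(response['response'])  # the generated text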

If you are interested in more AI content, check out my other articles on creating a blog post generator and on using Mistral AI on the CPU to summarize articles:

https://developer-service.blog/building-a-blog-post-generator-with-mistralai-and-streamlit/

https://developer-service.blog/how-to-summarize-articles-with-streamlit-and-langchain-with-mistral-7b-on-cpu/


Thompson Motif Index Explained

The Thompson Motif Index (TMI) is a comprehensive catalog of tens of thousands of motifs that scholars use to track recurring narrative elements across global folklore.

Examples include:

  • B300: Helpful animal aids the hero
  • D1163: Magic mirror reveals the truth

By sampling three motifs from a local JSON file, our generator injects symbolic depth into otherwise generic AI output, grounding each tale in mythic tradition.

Obtaining tmi.json

To obtain a well-prepared JSON file that includes all the motifs, follow these steps:

  1. Visit the fbkarsdorp/tmi GitHub repository.
  2. Navigate to data/tmi.json and click Raw.
  3. Save the file to your project directory as tmi.json.
  4. Ensure your script can read it (same folder, or update the path); the quick check below verifies this.
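With the file in place, a quick check confirms it parses and that motifs can be sampled. This sketch assumes the JSON parses to a list of motif records; if your copy is a dict keyed by motif ID, sample from its items instead:

    import json
    import random

    with open('tmi.json', encoding='utf-8') as f:
        motifs = json.load(f)

    # Assumption: motifs is a list of records; adapt if your file is a dict.
    for motif in random.sample(motifs, 3):
        print(motif)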

Architecture Overview

The generator’s pipeline follows a clear, linear flow, ensuring each component interacts seamlessly.

Here’s a visual representation of the architecture:

Architecture Diagram

The flowchart shows three phases—Setup, Generation, and Output—with each method encapsulated in its own cluster for clarity:

  • Setup: check_model() → get_user_input() → load_motifs()
  • Generation: generate_hero_journey()
  • Output: save_to_markdown() → narrate_story()

Architecture Breakdown

  1. Setup
    • check_model(): Verify or pull the selected LLM, guaranteeing local inference.
    • get_user_input(): Capture and validate the hero’s name and birthdate.
    • load_motifs(): Load the TMI JSON and sample three motifs to theme the narrative.
  2. Generation
    • generate_hero_journey(): For each stage in the generator’s condensed six-stage arc, build a context-aware prompt (including name, age, and motifs) and invoke the LLM to produce that story segment.
  3. Output
    • save_to_markdown(): Compile metadata and all story segments into a styled Markdown document.
    • narrate_story(): Sequentially synthesize speech for each story part, creating an audio companion to the text.

This modular structure ensures each phase is isolated, testable, and easily customizable for future enhancements.
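To make the flow tangible, here is a minimal sketch of how the phases could be wired together. The run() wrapper and its argument passing are assumptions for illustration; the method names match the breakdown above:

    def run(self):
        """Illustrative pipeline wiring: Setup -> Generation -> Output."""
        self.check_model()                       # Setup: ensure the LLM is available locally
        name, birthdate = self.get_user_input()  # Setup: collect hero details
        motifs = self.load_motifs()              # Setup: sample three TMI motifs
        # Generation: build stage prompts and invoke the LLM
        story = self.generate_hero_journey(name, birthdate, motifs)
        self.save_to_markdown(name, story)       # Output: Markdown document
        self.narrate_story(story)                # Output: offline narration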


Function Breakdown

In this section, we break down each of the functions described above and briefly explain their implementation and usage.
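The snippets below assume the following imports at the top of the script (a minimal sketch; your module layout may differ):

    import json
    import random
    import sys
    import time
    from datetime import datetime

    import pyttsx3
    from faker import Faker
    from ollama import Client
    from tqdm import tqdm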

Initialization (__init__)

    def __init__(self):
        """
        Initialize the HeroJourneyGenerator with necessary components.
        
        Sets up:
        - Faker for random name generation
        - Text-to-speech engine with configured properties
        - Ollama client for AI story generation
        """
        self.faker = Faker()
        self.engine = pyttsx3.init()
        self.engine.setProperty('rate', 150)  # Speed of speech
        self.engine.setProperty('volume', 0.9)  # Volume (0.0 to 1.0)
        self.ollama_client = Client()
        self.model_name = 'llama3.2:1b'

Code Description:

  • Faker: Supplies a random hero name when the user leaves the prompt blank.
  • TTS engine: Preconfigured for offline narration (see the voice tip below).
  • Ollama client & model: Establish the LLM interface.
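If the default voice does not suit you, pyttsx3 can list the voices installed on your system and switch between them. This is an optional tweak, not part of the generator itself:

    import pyttsx3

    engine = pyttsx3.init()
    for voice in engine.getProperty('voices'):  # voices available on this machine
        print(voice.id, voice.name)
    # Select a voice by its id (index 0 is just an example)
    engine.setProperty('voice', engine.getProperty('voices')[0].id)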

Model Verification (check_model)

    def check_model(self):
        """
        Check if the required Ollama model is available and pull it if needed.
        
        This method:
        1. Checks if the model exists in the local Ollama installation
        2. Pulls the model if it's not found
        3. Shows download progress with a progress bar
        4. Handles any errors during the process
        """
        try:
            # List installed models to check whether the target model is available
            models = self.ollama_client.list()
            model_names = [model.get('name', '') for model in models.get('models', [])]
            
            if self.model_name not in model_names:
                print(f"\nModel '{self.model_name}' not found. Pulling it now...")
                print("This may take a few minutes depending on your internet connection.")
                
                # Create a progress bar
                with tqdm(total=0, desc="Downloading model", unit="B", unit_scale=True, unit_divisor=1024) as pbar:
                    # Start the pull operation
                    pull_operation = self.ollama_client.pull(self.model_name, stream=True)
                    
                    # Update progress bar based on the stream
                    for chunk in pull_operation:
                        if hasattr(chunk, 'status'):
                            # Update progress if we have completed and total values
                            if hasattr(chunk, 'completed') and hasattr(chunk, 'total'):
                                try:
                                    completed = float(chunk.completed)
                                    total = float(chunk.total)
                                    if total > 0:
                                        # Update total if it changes
                                        if pbar.total != total:
                                            pbar.total = total
                                        
                                        # Update completed bytes
                                        pbar.update(completed - pbar.n)
                                except (ValueError, TypeError):
                                    pass
                            
                            # Handle completion
                            if chunk.status == 'success':
                                pbar.update(pbar.total - pbar.n)
                                break
                            
                            # Add a small delay to make the progress visible
                            time.sleep(0.1)
                
                print(f"\nModel '{self.model_name}' has been pulled successfully!")
        except Exception as e:
            print(f"\nError checking/pulling model: {e}")
            print("\nPlease make sure:")
            print("1. Ollama is installed and running")
            print("2. You have an internet connection")
            print("3. You have enough disk space")
            print("\nYou can manually pull the model by running:")
            print(f"ollama pull {self.model_name}")
            sys.exit(1)

This function ensures the required model is available locally, pulling it if necessary to maintain an offline workflow.

A tqdm progress bar tracks the download as it happens.

User Input (get_user_input)

    def get_user_input(self):
        """
        Collect user information for story customization.
        
        Returns:
            tuple: (name, birthdate) where:
                - name is either user input or a randomly generated name
                - birthdate is a string in YYYY-MM-DD format
        """
        print("\n=== Welcome to the Hero's Journey Story Generator ===")
        
        # Get user input
        name = input("\nEnter your name (or press Enter for a random name): ").strip()
        if not name:
            name = self.faker.name()
            print(f"Generated name: {name}")
            
        # Get birthdate from user
        while True:
            birthdate = input("\nEnter your birthdate (YYYY-MM-DD): ").strip()
            try:
                datetime.strptime(birthdate, '%Y-%m-%d')
                break
            except ValueError:
                print("Invalid date format. Please use YYYY-MM-DD.")
                
        return name, birthdate

Code Description:

  • Name: Falls back to a Faker-generated name when left blank.
  • Birthdate: Loops until a valid YYYY-MM-DD date is entered (the age helper below builds on this).
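Since the generation prompts also draw on the hero’s age (see generate_hero_journey above), a small helper can derive it from the validated birthdate. This is an illustrative sketch; the generator may compute the age differently:

    from datetime import datetime

    def age_from_birthdate(birthdate: str) -> int:
        """Return age in whole years for a YYYY-MM-DD birthdate string."""
        born = datetime.strptime(birthdate, '%Y-%m-%d')
        today = datetime.today()
        # Subtract one year if this year's birthday has not happened yet
        return today.year - born.year - ((today.month, today.day) < (born.month, born.day))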

Motif Sampling (load_motifs)