ISS in Real Time

Description

This is a web application that replays days on the International Space Station.

Notes

  • Client-side .env values are inlined at build time by Vite and referenced as import.meta.env.VALUE (Vite only exposes variables prefixed with VITE_ to client code)

Installation

  1. To get started, clone the repository and install the dependencies:

    git clone https://github.com/bfeist/ISS-in-real-time.git
    cd ISS-in-real-time
    npm install
  2. Create a .env file by copying .env.sample to .env

  3. Run /scripts/make-dev-ssl-cert.sh (needed for Docker deploys only)

Usage

Development

To start the frontend in development mode, run:

npm run dev

This starts the Vite development server for the frontend, available at http://localhost:8000.

Build

To build the application for production:

npm run build

This builds the application; the output is written to .local/vite/dist.

Deploy via Docker

  • npm run docker:preview:rebuild
    • Builds the Docker image:
      • Vite builds the React frontend to static assets in /.local/vite/dist
      • these assets are copied into the nginx image at the default nginx web root
  • npm run docker:preview starts the container
  • Go to https://localhost to hit the nginx server

Structure

  • src/: Contains the source code for the React frontend.
  • src/server-batch/: Contains the source code for the data pipeline that produces the static S3 assets from mission data downloaded from various public sources. Most developers won't need to run these scripts, since the data they produce is already publicly hosted at https://ares-iss-in-real-time.s3.us-gov-west-1.amazonaws.com
  • .local/vite/dist: Destination for the built frontend files.

Server Batch script details

Setup

  • Set the paths in .env that specify where downloading and processing will take place
  • Note that the S3 output folder is a local folder on disk; pushing its contents to an actual S3 bucket is outside the scope of these scripts.

The scripts below should be executed in order.

  • download_IA_sg_zips.py: Downloads all of the space-to-ground zip files uploaded to the Internet Archive by John Stoll in Building 2. The current size of this batch is 750 GB, and the download takes many days to run.
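
    A minimal sketch of this download step, assuming the internetarchive Python package; the item identifier and destination path below are hypothetical placeholders:

      # Download every zip file attached to one archive.org item
      # (assumes `pip install internetarchive`; the identifier and
      # destdir are hypothetical).
      from internetarchive import download

      download(
          "iss-space-to-ground-example",  # hypothetical item identifier
          glob_pattern="*.zip",
          destdir="/data/ia_zips",        # hypothetical download path
          verbose=True,
      )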

  • process_transcribe_ia_zips.py: Processes the downloaded Internet Archive zip files by extracting WAV audio files, converting them to the required format, segmenting the audio using Voice Activity Detection (VAD), transcribing the segments with WhisperX, and generating JSON transcription files. It also tracks which zip files have been processed or are in progress. This takes many months to run on an RTX 4090 and currently results in over 3 million files.
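
    A minimal sketch of the transcription stage, assuming the whisperx package; the paths, model size, and batch size are placeholders, and the real script additionally handles zip extraction, format conversion, VAD segmentation, and progress tracking:

      # Transcribe one audio segment with WhisperX and dump the result
      # as JSON (paths and model choice are hypothetical).
      import json
      import whisperx

      model = whisperx.load_model("large-v2", "cuda", compute_type="float16")
      audio = whisperx.load_audio("/data/segments/example.wav")
      result = model.transcribe(audio, batch_size=16)

      with open("/data/transcripts/example.json", "w") as f:
          json.dump(result["segments"], f, indent=2)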

  • make_s3_comm.py: Processes the JSON transcript files produced in step 2, converting them into one pipe-delimited CSV file per day and placing these in the 'comm' directory on S3. It also copies the corresponding AAC audio files to each day's S3 folder.
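
    A sketch of the per-day conversion; the transcript field names, column order, and folder layout here are assumptions, not the script's actual schema:

      # Collapse one day's JSON transcripts into a single pipe-delimited
      # CSV (field names and paths are hypothetical).
      import csv
      import glob
      import json

      rows = []
      for path in sorted(glob.glob("/data/transcripts/2013-03-01/*.json")):
          with open(path) as f:
              for seg in json.load(f):
                  rows.append([seg["start"], seg["end"], seg["text"]])

      with open("/data/s3/comm/2013/03/01/transcripts.csv", "w", newline="") as f:
          csv.writer(f, delimiter="|").writerows(rows)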

  • make_s3_dates_available.py: Maintains and updates the list of available dates for which data exists in S3, so that other scripts and services can reference which dates have associated data.
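
    A sketch of rebuilding that list by scanning the local S3 staging folder; the year/month/day layout and output filename are assumptions:

      # Derive the available-dates list from the comm folder structure
      # (layout and filename are hypothetical).
      import json
      from pathlib import Path

      comm = Path("/data/s3/comm")
      dates = sorted(
          f"{y.name}-{m.name}-{d.name}"
          for y in comm.iterdir() if y.is_dir()
          for m in y.iterdir() if m.is_dir()
          for d in m.iterdir() if d.is_dir()
      )
      Path("/data/s3/dates_available.json").write_text(json.dumps(dates))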

  • make_s3_image_manifest.py: Generates astronaut-photography image manifests by fetching data from the NASA EOL PhotosDatabaseAPI and places them in a nested folder structure in S3. Note that the images themselves are served directly from the NASA EOL servers to the browser.
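
    A sketch of one manifest fetch; the endpoint path, query syntax, and output layout shown here are assumptions based on the API's name, not confirmed details of the script:

      # Fetch one day of ISS photo metadata from the NASA EOL
      # PhotosDatabaseAPI (endpoint and query parameters are assumptions).
      import json
      import requests

      API = "https://eol.jsc.nasa.gov/SearchPhotos/PhotosDatabaseAPI/PhotosDatabaseAPI.pl"
      resp = requests.get(API, params={"query": "images|pdate|eq|20130301"}, timeout=60)
      resp.raise_for_status()

      with open("/data/s3/images/2013/03/01/manifest.json", "w") as f:
          json.dump(resp.json(), f)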

  • make_s3_ephemera.py: Downloads two-line element (TLE) ephemeris data from space-track.org for every month present in S3's comm folder structure and organizes it into the 'ephemera' directory on S3.
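
    A sketch of pulling one month of ISS TLEs; the login flow and query URL follow space-track.org's documented REST API, with placeholder credentials and an assumed local output layout (25544 is the ISS NORAD catalog ID):

      # Log in to space-track.org and fetch ISS TLEs for one month
      # (credentials and output path are placeholders).
      import requests

      BASE = "https://www.space-track.org"
      with requests.Session() as s:
          s.post(f"{BASE}/ajaxauth/login",
                 data={"identity": "user@example.com", "password": "..."})
          tles = s.get(
              f"{BASE}/basicspacedata/query/class/tle/NORAD_CAT_ID/25544/"
              "EPOCH/2013-03-01--2013-04-01/orderby/EPOCH/format/tle"
          ).text

      with open("/data/s3/ephemera/2013-03.tle", "w") as f:
          f.write(tles)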

  • make_s3_eva_info.py: Scrapes https://en.wikipedia.org/wiki/List_of_International_Space_Station_spacewalks and generates an Extra-Vehicular Activity (EVA) information JSON file, stored at the root of S3.
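
    A sketch of the scrape, assuming requests and beautifulsoup4; the real script's JSON schema is richer than the raw rows captured here, and the output filename is hypothetical:

      # Pull the spacewalk tables from Wikipedia into a JSON file.
      import json
      import requests
      from bs4 import BeautifulSoup

      URL = "https://en.wikipedia.org/wiki/List_of_International_Space_Station_spacewalks"
      soup = BeautifulSoup(requests.get(URL, timeout=60).text, "html.parser")

      rows = []
      for tr in soup.select("table.wikitable tr"):
          cells = [c.get_text(" ", strip=True) for c in tr.find_all(["td", "th"])]
          if cells:
              rows.append(cells)

      with open("/data/s3/eva_info.json", "w") as f:
          json.dump(rows, f)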

  • get_youtube_live_recordings.py: Retrieves recorded live-stream videos from YouTube matching the keywords "station", "spacewalk", or "ISS". It filters the results and saves the relevant video metadata to the root of S3 as a pipe-delimited CSV file.
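
    A sketch of one search call against the public YouTube Data API v3; the API key is a placeholder, and whether the script uses this exact endpoint and filtering is an assumption:

      # Search YouTube for completed live streams matching one keyword
      # (a script like this would repeat the call per keyword).
      import requests

      resp = requests.get(
          "https://www.googleapis.com/youtube/v3/search",
          params={
              "part": "snippet",
              "q": "ISS",                # one of the keyword queries
              "type": "video",
              "eventType": "completed",  # only finished live streams
              "maxResults": 50,
              "key": "YOUR_API_KEY",     # placeholder
          },
          timeout=60,
      )
      for item in resp.json().get("items", []):
          print(item["id"]["videoId"], "|", item["snippet"]["title"])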

  • get_crew_arrival_dep.py: Scrapes the table at https://en.wikipedia.org/wiki/List_of_International_Space_Station_expeditions#Completed_expeditions into a JSON file stored at the root of S3.
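
    A sketch using pandas to flatten the expeditions table; it assumes pandas with lxml installed, and the table index and output filename are hypothetical:

      # Read every table on the page and dump the first one as JSON records.
      import pandas as pd

      URL = "https://en.wikipedia.org/wiki/List_of_International_Space_Station_expeditions"
      tables = pd.read_html(URL)
      tables[0].to_json("/data/s3/expeditions.json", orient="records")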
