Dicoogle on AWS

Dicoogle is an open source PACS (Picture Archiving and Communication System) platform designed for flexible and extensible medical imaging workflows. Unlike traditional PACS, Dicoogle is built with a plugin-based architecture, allowing users to customize and extend its core features such as indexing, querying, storage, and web interfaces without modifying the core codebase.

Key features include pluggable indexing, query, storage, and web interface components.

During my tenure as a PACS administrator, I developed a deep appreciation for the system's functionality and its ability to integrate with various platforms and technologies. That hands-on experience sparked my interest in pursuing this project: our objective is to deploy and configure the application within the AWS environment.

Let’s dive in.


PACS in AWS: Build Infrastructure

Take note of our diagram; these are the AWS components we'll leverage to bring Dicoogle to life.

  1. Create Group: Create an IAM group CodeServerGroup → Attach the AWS managed policies AmazonEC2FullAccess, AmazonS3FullAccess, AWSCloudFormationFullAccess, AWSDataSyncFullAccess, and SecretsManagerReadWrite to the group
  2. Add Inline Policies: Add the following inline policies to the group.
    **AllowKMS**
    
    {
    	"Version": "2012-10-17",
    	"Statement": [
    		{
    			"Sid": "AllowKMSCreation",
    			"Effect": "Allow",
    			"Action": [
    				"kms:CreateKey",
    				"kms:DescribeKey",
    				"kms:ListKeys",
    				"kms:ListAliases",
    				"kms:CreateAlias",
    				"kms:ScheduleKeyDeletion",
    				"kms:CreateGrant"
    			],
    			"Resource": "*"
    		}
    	]
    }
    							
    **ECRPolicy**
    
    {
    	"Version": "2012-10-17",
    	"Statement": [
    		{
    			"Effect": "Allow",
    			"Action": "ecr:*",
    			"Resource": "*"
    		}
    	]
    }
    							
  3. Create User: Create the user Codeserveruser and add it to the IAM group CodeServerGroup
  4. Create IDE: AWS deprecated Cloud9 on July 25, 2024, so we'll create our IDE on EC2 → Create an EC2 instance Name: JONHNLCODESERVER → AMI: Amazon Linux 2023 → Instance Type: t2.micro → Create a Key Pair (Save/download the private key! Key pairs are essential for securely accessing EC2 instances. A key pair consists of a public key and a private key; the two work together using asymmetric cryptography to authenticate users and secure communication.) → Security Group: Allow HTTP and SSH traffic → Launch Instance
  5. Modify EC2 Private Key: On the downloaded private key, run chmod 400 ~/"FILENAME".pem (this makes the key readable only by you, the standard and recommended permission setting for an AWS EC2 private key; SSH will refuse a key file that is too permissive) → Now you'll be able to SSH from VS Code or any terminal to your EC2 instance to push code.
  6. Create Private Key for CloudFormation: Run the following lines of code.
    # Installs a tool called jq that helps read JSON data files.
    	sudo yum install -y jq
    # Creates a new security key for AWS servers and saves it to a file called dicoogle.pem.
    	aws ec2 create-key-pair --key-name "dicoogle" | jq -r ".KeyMaterial" > ~/dicoogle.pem
    # Makes the key file private so only you can read it (for security).
    	chmod 400 ~/dicoogle.pem
    							
  7. Create S3 Bucket: We'll store our .YAML files in this bucket. Run the following commands on the EC2 Codeserver JONHNLCODESERVER
    # Creates a random 20-character string to make the bucket name unique.
    	SUFFIX=$( echo $RANDOM | md5sum | head -c 20 )
    # Creates a bucket name by combining "dicoogle-" with the random string.
    	BUCKET=dicoogle-$SUFFIX
    # Creates a new storage bucket in AWS S3 with that unique name.
    	aws s3 mb s3://$BUCKET
    # Displays the name of the bucket that was just created.
    	echo "Bucket name: $BUCKET"
    							
  8. Create Elastic Container Registry (ECR) repos: Run the following commands on the EC2 Codeserver JONHNLCODESERVER
    # Creates a new encryption key in AWS and saves its ID for later use.
    	KMS_KEY=$( aws kms create-key | jq -r .KeyMetadata.KeyId )
    # Creates a Docker image storage repository called "dicoogle" that's encrypted with the new key.
    	aws ecr create-repository --repository-name dicoogle --encryption-configuration encryptionType=KMS,kmsKey=$KMS_KEY
    # Creates a Docker image storage repository called "nginx" that's encrypted with the same key.
    	aws ecr create-repository --repository-name nginx --encryption-configuration encryptionType=KMS,kmsKey=$KMS_KEY
    # Creates a Docker image storage repository called "ghostunnel" that's encrypted with the same key.
    	aws ecr create-repository --repository-name ghostunnel --encryption-configuration encryptionType=KMS,kmsKey=$KMS_KEY
    							
  9. Download all files from the GitHub repo: Run the following commands on the EC2 Codeserver JONHNLCODESERVER
    # Create the environment directory if it doesn't exist, then enter it.
    	mkdir -p ~/environment && cd ~/environment
    # Clone into a directory named "dicoogle" so the paths in later steps match.
    	git clone https://github.com/yojon808/AWS-Dicoogle dicoogle
    							
  10. Build Docker Images and Push to ECR: Run the following commands on the EC2 Codeserver JONHNLCODESERVER
    cd ~/environment/dicoogle/docker/dicoogle
    ./build.sh
    cd ~/environment/dicoogle/docker/nginx
    ./build.sh
    cd ~/environment/dicoogle/docker/ghostunnel
    ./build.sh
    							
  11. Upload all .YAML files (14 files total) to the S3 bucket. These .YAML files will be used as CloudFormation templates. Run the following commands on the EC2 Codeserver JONHNLCODESERVER
    # Changes to the "dicoogle" folder in your environment directory.
    	cd ~/environment/dicoogle
    # Makes the "artifacts.sh" script file executable so you can run it.
    	chmod 755 ./artifacts.sh
    # Runs the "artifacts.sh" script and passes the bucket name (created earlier) as input to the script.
    	./artifacts.sh $BUCKET
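Step 10's three build.sh scripts live in the GitHub repo rather than in this post. As a hedged sketch (the account ID and region below are placeholders, and the scripts' exact contents are an assumption), each one boils down to building its image, tagging it with the private ECR registry URI, and pushing it:

```shell
# Hedged sketch of step 10's build-and-push flow. ACCOUNT_ID and REGION
# are placeholders -- substitute your own values.
ACCOUNT_ID=123456789012
REGION=us-east-1
REGISTRY="$ACCOUNT_ID.dkr.ecr.$REGION.amazonaws.com"

# One image URI per ECR repository created in step 8.
for REPO in dicoogle nginx ghostunnel; do
  IMAGE="$REGISTRY/$REPO:latest"
  # Printed rather than executed; remove the echo to run for real.
  echo "docker build -t $REPO . && docker tag $REPO:latest $IMAGE && docker push $IMAGE"
done
```

Those `$REGISTRY/$REPO:latest` URIs are also the values the DicoogleImage, NginxImage, and GhostunnelImage parameters will ask for during the CloudFormation deploy.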
    							

Create SSL Certificates

SSL (Secure Sockets Layer) certificates serve two main functions. Like an ID card, a certificate proves legitimacy; it also encrypts traffic between your browser and the website, which is critical if you're working with HIPAA datasets. Without SSL, sensitive information like login credentials, personal details, and payment data could be intercepted and read by unauthorized parties. In this lab we will be using self-signed certificates; for production environments, PLEASE USE CERTIFICATES FROM A LEGITIMATE CA (Certificate Authority).
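If you've never looked inside a certificate, here is a minimal throwaway example of a self-signed cert. It simplifies what the steps below do: it uses an inline -subj instead of the lab's .cnf config files, writes to /tmp, and "demo-ca" is just a placeholder name:

```shell
# Generate a throwaway self-signed certificate (1-day validity) and an
# unencrypted private key, with no config file.
openssl req -x509 -newkey rsa:2048 -sha256 -nodes \
  -subj "/CN=demo-ca" -days 1 \
  -keyout /tmp/demo-key.pem -out /tmp/demo-cert.pem 2>/dev/null

# Inspect the result: for a self-signed certificate, the subject (who it
# identifies) and the issuer (who vouches for it) are the same entity.
openssl x509 -in /tmp/demo-cert.pem -noout -subject -issuer
```

That subject-equals-issuer property is exactly why browsers warn on self-signed certificates: no independent party is vouching for the identity.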

  1. Change to the "cert" directory: cd ~/environment/dicoogle/cert
  2. Generate Root CA: Run the following command on the EC2 Codeserver: openssl req -x509 -config openssl-ca.cnf -newkey rsa:4096 -sha256 -nodes -out cacert.pem -outform PEM
    1. Uses OpenSSL to create a certificate
    2. Makes it a "CA" (Certificate Authority) type that can sign other certificates
    3. Uses settings from a config file called "openssl-ca.cnf"
    4. Creates a strong 4096-bit encryption key
    5. Saves the certificate as "cacert.pem"
    6. Doesn't password-protect the private key (-nodes = "no DES" encryption)
  3. Generate NGINX CSR: Run the following command on the EC2 Codeserver: openssl req -config openssl-nginx.cnf -newkey rsa:2048 -sha256 -nodes -out nginxcert.csr -outform PEM
  4. Sign NGINX CSR: Run the following command on the EC2 Codeserver: openssl ca -config openssl-ca.cnf -policy signing_policy -extensions signing_req -out nginxcert.pem -infiles nginxcert.csr
  5. Generate GHOSTUNNEL CSR: Run the following command on the EC2 Codeserver: openssl req -config openssl-ghostunnel.cnf -newkey rsa:2048 -sha256 -nodes -out ghostunnelcert.csr -outform PEM
  6. Sign GHOSTUNNEL CSR: Run the following command on the EC2 Codeserver: openssl ca -config openssl-ca.cnf -policy signing_policy -extensions signing_req -out ghostunnelcert.pem -infiles ghostunnelcert.csr
  7. Generate Client EC2 CSR: Run the following command on the EC2 Codeserver: openssl req -config openssl-client.cnf -newkey rsa:2048 -sha256 -nodes -out clientcert.csr -outform PEM
  8. Sign Client EC2 CSR: Run the following command on the EC2 Codeserver: openssl ca -config openssl-ca.cnf -policy signing_policy -extensions signing_req -out clientcert.pem -infiles clientcert.csr
  9. Generate Storage EC2 CSR: Run the following command on the EC2 Codeserver: openssl req -config openssl-storage.cnf -newkey rsa:2048 -sha256 -nodes -out storagecert.csr -outform PEM
  10. Sign Storage EC2 CSR: Run the following command on the EC2 Codeserver: openssl ca -config openssl-ca.cnf -policy signing_policy -extensions signing_req -out storagecert.pem -infiles storagecert.csr
  11. Create Entries in Secrets Manager:
    chmod 755 ./secrets.sh
    ./secrets.sh
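secrets.sh ships with the repo, so its exact contents aren't shown here. As a hedged sketch of what it presumably does, it pushes each PEM into Secrets Manager so the CloudFormation stack can reference the certificates by ARN. The dicoogle/ secret names below are my assumption, not necessarily the script's real naming:

```shell
# Hedged sketch: store each certificate file generated above as a
# Secrets Manager secret. Secret names are assumptions.
for PEM in cacert nginxcert ghostunnelcert clientcert storagecert; do
  # Printed rather than executed; remove the echo to run for real.
  LAST="aws secretsmanager create-secret --name dicoogle/$PEM --secret-string file://$PEM.pem"
  echo "$LAST"
done
```

The ARNs assigned to these secrets are what the certificate and key parameters (NginxCert/NginxKey, GhostunnelCert/GhostunnelKey, CACert) expect in the Deploy section below.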
    								

Deploy

For this to work you'll need a PUBLIC HOSTED ZONE in Route53. Most public domains cost about $15/year; private hosted zones will NOT work. Once we have our Public Hosted Zone, we can start to deploy Dicoogle in CloudFormation.

  1. Configure CloudFormation: Go to CloudFormation console → "Create stack" → Choose our S3 bucket and use "dicoogle-main-template.yaml" → Click "Next" → Provide Stack Name: "JONHNL-Dicoogle" → See below for parameters
  2. KeyName: From the dropdown select our "dicoogle" key
  3. S3BucketName: Bucket name should look like this "dicoogle-07f33###############"
  4. DicoogleImage: URI found in ECR Private Repository
  5. NginxImage: URI found in ECR Private Repository
  6. GhostunnelImage: URI found in ECR Private Repository
  7. NginxCert/NginxKey: ARN located in secrets manager
  8. GhostunnelCert/GhostunnelKey: ARN located in secrets manager
  9. CACert: ARN located in secrets manager
  10. DomainName: Our Public Domain found in Route53
  11. HostedZone: From the dropdown select our Hosted Zone ID
  12. AvailabilityZones: Select 2 AZs within your Region
  13. The remaining parameters contain default values and input is optional: keep the defaults.
  14. Configure stack options: Click "Next"
  15. Review and create: Select the two checkboxes, then "Create stack". Stack creation should take 15 to 20 minutes.
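For reference, the console walkthrough above roughly corresponds to one CLI call. This is an untested sketch: the bucket name is a placeholder, and a real run also needs the image, certificate, and DNS values from steps 2 through 13 passed via --parameters:

```shell
# Compose the S3 URL of the main template uploaded in step 11 of the
# build section. The bucket name is a placeholder.
BUCKET=dicoogle-example123
TEMPLATE_URL="https://$BUCKET.s3.amazonaws.com/dicoogle-main-template.yaml"

# Printed rather than executed; remove the echo (and supply the missing
# --parameters key/value pairs) to launch the stack for real.
echo aws cloudformation create-stack \
  --stack-name JONHNL-Dicoogle \
  --template-url "$TEMPLATE_URL" \
  --capabilities CAPABILITY_NAMED_IAM
```

The --capabilities flag plays the same role as the two checkboxes in the console: it acknowledges that the stack may create IAM resources.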

To Be Continued

Next we will cover how to bulk-upload images to Dicoogle, how to test indexing of the uploaded images, how to test C-FIND and C-MOVE, and how to clean up the deployed solution.

Stay Tuned!
