Is Mining Litecoins on AWS EC2 Profitable? PART 2: GPU Mining


In part 1, we looked at mining Litecoins on CPUs rented from Amazon EC2. Now, let us see if we can get better performance by mining Litecoins using GPUs.

Using CPUs, we were able to achieve an average hash rate of 144 KH/s on Amazon EC2’s c3.8xlarge instances, which come with 32 virtual CPUs. Recently, Amazon made available their new generation of GPU instances, called g2.2xlarge, which provide access to an NVIDIA GRID GPU (“Kepler” GK104) with 1,536 CUDA cores and 4 GB of video memory. GPUs are supposed to provide better performance than CPUs when mining Litecoins. Is this true of the virtual computing instances provided by Amazon EC2? Let us find out.

1. Set Up g2.2xlarge Spot Instance on Amazon EC2

As before, we need to have our AWS account and Litecoin mining pool worker set up.

Go to the EC2 Management Console, click Spot Requests on the left, then Request Spot Instances. You will be asked to choose an Amazon Machine Image (AMI). GPU instances require AMIs based on hardware-assisted virtualization (HVM), so make sure that you select an HVM AMI. This walkthrough uses the Ubuntu Server 12.04.3 LTS for HVM Instances (64-bit) AMI, but you can choose another HVM AMI if you know how to set it up.

Next, choose the GPU instance of type g2.2xlarge, and configure it with the following:

  • Number of instances: 1
  • Purchasing option: check Request Spot Instances
  • Maximum price: the maximum price you are willing to pay for the instance. A good rule of thumb is to use the current price shown on the screen, or slightly more. At the time of writing, g2.2xlarge Spot Instances were going for about $0.30 per hour.

You can use the default values for the other fields, or customise them if you are familiar with Amazon EC2.

Once you have configured the instance, click Launch. A dialogue box will appear, asking you to select or create a key pair. If you have already created an Amazon EC2 key pair, you can select it from the dropdown list. Otherwise, select Create a new key pair, enter a name for it (for example, “LTC”) and click Download Key Pair. You will get a file like LTC.pem, which contains the private key you will need to log in to your new server.

Wait a few minutes, and if your Spot Request was fulfilled successfully, you should see a new running instance in the Instances section of the EC2 Management Console.
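If you prefer the command line, the same Spot request can be sketched with the AWS CLI. Here AMI_ID is a placeholder for your chosen HVM AMI, and the key pair name is assumed to be “LTC” as above:

```shell
# Sketch: request one g2.2xlarge Spot Instance via the AWS CLI.
# AMI_ID is a placeholder; "LTC" is the key pair created earlier.
aws ec2 request-spot-instances \
  --spot-price "0.30" \
  --instance-count 1 \
  --launch-specification \
    '{"ImageId": "AMI_ID", "InstanceType": "g2.2xlarge", "KeyName": "LTC"}'
```

The launch specification is ordinary JSON, so you can add fields such as a security group or availability zone the same way.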

2. Set up Required Software

Log in to your newly instantiated server using SSH (Windows users can use PuTTY). You will need the public DNS name of the server, which you can find in the EC2 Management Console:

chmod 400 LTC.pem
ssh -i LTC.pem ubuntu@INSTANCE_PUBLIC_URL

Upgrade preinstalled packages (optional):

sudo apt-get update
sudo apt-get upgrade

Install required packages:

sudo apt-get install build-essential libcurl4-openssl-dev git

Install the NVIDIA driver and CUDA toolkit. Download the CUDA 5.5 installer (a .run file) from NVIDIA’s website, then run it with root privileges, substituting the actual installer filename:

sudo sh CUDA_INSTALLER.run

Accept the license agreement, and install the NVIDIA driver and CUDA toolkit into the default location:

Do you accept the previously read EULA? (accept/decline/quit): accept
Install NVIDIA Accelerated Graphics Driver for Linux-x86_64 319.37? ((y)es/(n)o/(q)uit): y
Install the CUDA 5.5 Toolkit? ((y)es/(n)o/(q)uit): y
Enter Toolkit Location [ default is /usr/local/cuda-5.5 ]:
Install the CUDA 5.5 Samples? ((y)es/(n)o/(q)uit): n

Use a text editor such as nano to add these lines to the end of your ~/.bashrc file:

export PATH=/usr/local/cuda-5.5/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda-5.5/lib64:$LD_LIBRARY_PATH

Reload ~/.bashrc:

source ~/.bashrc

Install CudaMiner, substituting the URL of the CudaMiner Git repository (the build steps may vary slightly between versions, so check the repository’s instructions):

git clone CUDAMINER_REPO_URL
cd CudaMiner
./configure
make

3. Start Mining

Finally, start cudaminer, using the pool URL, worker name and worker password for the mining pool worker set up in part 1:

./cudaminer --url=stratum+tcp://POOL_URL --userpass=WORKER_NAME:WORKER_PASSWORD -H 1 -C 1

The -H 1 option tells cudaminer to distribute SHA-256 hashing evenly across all CPU cores, and -C 1 turns on the texture cache for mining, which should improve performance on Kepler GPUs.

You should see output similar to:

	   *** CudaMiner for nVidia GPUs by Christian Buchner ***
	             This is version 2013-12-01 (beta)
	based on pooler-cpuminer 2.3.2 (c) 2010 Jeff Garzik, 2012 pooler
	       Cuda additions Copyright 2013 Christian Buchner
	   My donation address: LKS1WDKGED647msBQfLBHV3Ls8sveGncnm

[2013-12-02 14:52:58] 1 miner threads started, using 'scrypt' algorithm.
[2013-12-02 14:52:58] Starting Stratum on stratum+tcp://
[2013-12-02 14:53:22] GPU #0: GRID K520 with compute capability 3.0
[2013-12-02 14:53:22] GPU #0: interactive: 0, tex-cache: 1D, single-alloc: 1
[2013-12-02 14:53:22] GPU #0: Performing auto-tuning (Patience...)
[2013-12-02 14:53:22] GPU #0: maximum warps: 459
[2013-12-02 14:55:35] GPU #0:  158.94 khash/s with configuration K32x14
[2013-12-02 14:55:35] GPU #0: using launch configuration K32x14
[2013-12-02 14:55:35] GPU #0: GRID K520, 14336 hashes, 0.09 khash/s
[2013-12-02 14:55:35] GPU #0: GRID K520, 14336 hashes, 80.29 khash/s
[2013-12-02 14:55:38] GPU #0: GRID K520, 430080 hashes, 154.46 khash/s
[2013-12-02 14:55:44] accepted: 1/1 (100.00%), 154.46 khash/s (yay!!!)
[2013-12-02 14:55:49] GPU #0: GRID K520, 1763328 hashes, 158.17 khash/s
[2013-12-02 14:55:51] GPU #0: GRID K520, 286720 hashes, 151.47 khash/s
[2013-12-02 14:55:54] accepted: 2/2 (100.00%), 151.47 khash/s (yay!!!)
[2013-12-02 14:55:54] accepted: 3/3 (100.00%), 151.47 khash/s (yay!!!)


As you can see from the output above, the GPU is hitting about 158 KH/s. This is only slightly higher than the 144 KH/s we achieved with CPU mining, but GPU instances cost about $0.30 per hour, significantly less than the roughly $2.00 per hour for the CPU instances. Will this translate to profitable Litecoin mining? Unfortunately, no; it will merely reduce our losses by an order of magnitude:

Litecoin Mining Calculation for GPU
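The calculation above can be sketched with a quick back-of-the-envelope script. The hash rate and hourly cost come from this experiment; the difficulty, block reward, and LTC price below are illustrative assumptions only, not the actual figures from the table:

```shell
# Rough mining-profitability estimate. DIFFICULTY and LTC_PRICE are
# illustrative assumptions; plug in current values for a real estimate.
HASHRATE_KHS=158      # measured GPU hash rate (KH/s)
COST_PER_HOUR=0.30    # g2.2xlarge Spot price (USD/hour)
DIFFICULTY=3000       # assumed Litecoin network difficulty
BLOCK_REWARD=50       # LTC awarded per block
LTC_PRICE=30          # assumed USD per LTC

awk -v h="$HASHRATE_KHS" -v c="$COST_PER_HOUR" -v d="$DIFFICULTY" \
    -v r="$BLOCK_REWARD" -v p="$LTC_PRICE" 'BEGIN {
  # Expected blocks per second = hashes per second / (difficulty * 2^32)
  ltc_per_hour = (h * 1000) / (d * 2^32) * r * 3600
  revenue = ltc_per_hour * p
  printf "LTC/hour: %.6f  revenue: $%.4f  cost: $%.2f  profit: $%.4f\n",
         ltc_per_hour, revenue, c, revenue - c
}'
```

With these assumed numbers the script reports a loss of roughly $0.23 per hour; the exact figure depends entirely on the difficulty and price you plug in.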

CPU + GPU Combo

What if we were to use both the CPU and GPU to mine? Would that give us a boost in performance? To find out, we will use screen, a useful tool that lets us run different processes in separate terminal “screens” and switch back and forth between them.

First, we need to download and build minerd (substitute the URL of the pooler cpuminer 2.3.2 source tarball), then start screen:

cd ~
wget CPUMINER_TARBALL_URL
tar -xzf pooler-cpuminer-2.3.2.tar.gz
cd cpuminer-2.3.2/
./configure CFLAGS="-O3"
make
cd ..
screen

Start GPU mining:

cd ~/CudaMiner
./cudaminer --url=stratum+tcp://POOL_URL --userpass=WORKER_NAME:WORKER_PASSWORD -H 1 -C 1

Press Ctrl+a Ctrl+c to start a new screen, which we will use for CPU mining:

cd ~/cpuminer-2.3.2
./minerd --url=stratum+tcp://POOL_URL --userpass=WORKER_NAME:WORKER_PASSWORD

Now, you can use Ctrl+a Ctrl+a to toggle between the two screens. With this setup, I got about 140 KH/s from CudaMiner and 36 KH/s from minerd, for a combined 176 KH/s. This is only a slight increase in hash rate, and still does not make mining on Amazon EC2 GPU instances profitable.


We have tried renting virtual CPUs and GPUs from Amazon EC2 to mine Litecoins, with disappointing results. As expected, GPUs provide greater mining performance per dollar spent than CPUs, but mining Litecoins on Amazon EC2 is still not profitable. Mining is probably best left to those who have the resources to buy and operate large quantities of mining hardware for economies of scale. Even then, profits are not assured, despite today’s record-breaking Litecoin prices. For the rest of us, buying Litecoins with cash would be the cheaper and more reliable way to get some Litecoins into our wallets.


7 thoughts on “Is Mining Litecoins on AWS EC2 Profitable? PART 2: GPU Mining”

  1. Minister Mario F. Stevenson (Stevenson-El-Quyusufyyat) says:

    Even though it was posted a year ago, the process is still comparable today. It makes for good information.

  2. Naveen says:

    Hi Aloysius, I tried your steps, but with CUDA 6.5 and the latest version of cudaminer, and I keep running into the following error when I build cudaminer.

    Any help would be greatly appreciated.


    gcc -std=gnu99 -DHAVE_CONFIG_H -I. -msse2 -fopenmp -pthread -fno-strict-aliasing -I./compat/jansson -DSCRYPT_KECCAK512 -DSCRYPT_CHACHA -DSCRYPT_CHOOSE_COMPILETIME -O3 -MT cudaminer-cpu-miner.o -MD -MP -MF .deps/cudaminer-cpu-miner.Tpo -c -o cudaminer-cpu-miner.o `test -f 'cpu-miner.c' || echo './'`cpu-miner.c
    mv -f .deps/cudaminer-cpu-miner.Tpo .deps/cudaminer-cpu-miner.Po
    gcc -std=gnu99 -DHAVE_CONFIG_H -I. -msse2 -fopenmp -pthread -fno-strict-aliasing -I./compat/jansson -DSCRYPT_KECCAK512 -DSCRYPT_CHACHA -DSCRYPT_CHOOSE_COMPILETIME -O3 -MT cudaminer-util.o -MD -MP -MF .deps/cudaminer-util.Tpo -c -o cudaminer-util.o `test -f 'util.c' || echo './'`util.c
    mv -f .deps/cudaminer-util.Tpo .deps/cudaminer-util.Po
    gcc -std=gnu99 -DHAVE_CONFIG_H -I. -msse2 -fopenmp -pthread -fno-strict-aliasing -I./compat/jansson -DSCRYPT_KECCAK512 -DSCRYPT_CHACHA -DSCRYPT_CHOOSE_COMPILETIME -O3 -MT cudaminer-sha2.o -MD -MP -MF .deps/cudaminer-sha2.Tpo -c -o cudaminer-sha2.o `test -f 'sha2.c' || echo './'`sha2.c
    mv -f .deps/cudaminer-sha2.Tpo .deps/cudaminer-sha2.Po
    g++ -DHAVE_CONFIG_H -I. -msse2 -fopenmp -pthread -fno-strict-aliasing -I./compat/jansson -DSCRYPT_KECCAK512 -DSCRYPT_CHACHA -DSCRYPT_CHOOSE_COMPILETIME -O3 -MT cudaminer-scrypt.o -MD -MP -MF .deps/cudaminer-scrypt.Tpo -c -o cudaminer-scrypt.o `test -f 'scrypt.cpp' || echo './'`scrypt.cpp
    mv -f .deps/cudaminer-scrypt.Tpo .deps/cudaminer-scrypt.Po
    g++ -DHAVE_CONFIG_H -I. -msse2 -fopenmp -pthread -fno-strict-aliasing -I./compat/jansson -DSCRYPT_KECCAK512 -DSCRYPT_CHACHA -DSCRYPT_CHOOSE_COMPILETIME -O3 -MT cudaminer-scrypt-jane.o -MD -MP -MF .deps/cudaminer-scrypt-jane.Tpo -c -o cudaminer-scrypt-jane.o `test -f 'scrypt-jane.cpp' || echo './'`scrypt-jane.cpp
    mv -f .deps/cudaminer-scrypt-jane.Tpo .deps/cudaminer-scrypt-jane.Po
    /usr/local/cuda/bin/nvcc -O3 -Xptxas "-abi=no -v" -arch=compute_10 --maxrregcount=64 --ptxas-options=-v -I./compat/jansson -o salsa_kernel.o -c
    nvcc fatal : Value 'compute_10' is not defined for option 'gpu-architecture'
    make[2]: *** [salsa_kernel.o] Error 1
    make[2]: Leaving directory `/home/ubuntu/CudaMiner'
    make[1]: *** [all-recursive] Error 1
    make[1]: Leaving directory `/home/ubuntu/CudaMiner'
    make: *** [all] Error 2
    ubuntu@ip-10-236-152-203:~/CudaMiner$ nvcc fatal : Value 'compute_10' is not defined for option 'gpu-architecture'
    The program 'nvcc' is currently not installed. You can install it by typing:
    sudo apt-get install nvidia-cuda-toolkit
    You will have to enable the component called 'multiverse'


    • Regarding “You will have to enable the component called ‘multiverse’”: it seems you need to enable the multiverse component in your sources file, then run sudo apt-get install nvidia-cuda-toolkit.

    • helpuljim says:

      In your Makefile, change all occurrences of

      That did the trick for me.
