In part 1, we looked at mining Litecoins on CPUs rented from Amazon EC2. Now, let us see if we can get better performance by mining Litecoins using GPUs.
Using CPUs, we were able to achieve an average hash rate of 144 KH/s using Amazon EC2’s c3.8xlarge instances, which come with 32 virtual CPUs. Recently, Amazon made available their new generation of GPU instances, called g2.2xlarge, which provide access to NVIDIA GRID GPUs (“Kepler” GK104), each with 1,536 CUDA cores and 4 GB of video memory. GPUs are supposed to provide better performance than CPUs when mining Litecoins. Is this true of the virtual computing instances provided by Amazon EC2? Let us find out.
1. Set Up g2.2xlarge Spot Instance on Amazon EC2
As before, we need to have our AWS account and Litecoin mining pool worker set up.
Go to the EC2 Management Console, then click on Spot Requests on the left, then Request Spot Instances. You will be asked to choose an Amazon Machine Image (AMI). GPU instances require AMIs based on hardware-assisted virtualization (HVM), so make sure that you select an HVM AMI. The instructions below assume the Ubuntu Server 12.04.3 LTS for HVM Instances (64-bit) AMI, but you can choose another HVM AMI if you know how to set it up.
Next, choose the GPU instance of type g2.2xlarge, and configure it with the following:
- Number of instances: 1
- Purchasing option: check Request Spot Instances
- Maximum price: the maximum price you are willing to pay for the instance. A good rule of thumb is to use the current price shown on the screen, or slightly more. At the time of writing, g2.2xlarge Spot Instances were going for about $0.30 per hour.
You can use the default values for the other fields, or customise them if you are familiar with Amazon EC2.
Once you have configured the instance, click Launch. A dialogue box will appear, asking you to select or create a key pair. If you have already created an Amazon EC2 key pair, you can select it from the dropdown list. Otherwise, select Create a new key pair, enter a name for it (for example, “LTC”) and click Download Key Pair. You will get a file like LTC.pem, which contains the private key you will need to log in to your new server.
Wait a few minutes, and if your Spot Request was fulfilled successfully, you should see a new running instance in the Instances section of the EC2 Management Console.
2. Set up Required Software
Log in to your newly instantiated server using SSH. Windows users can use PuTTY. You will need the public URL of the server, which you can find from the EC2 Management Console (it will look something like ec2-xx-xx-xx-xx.yyy.compute.amazonaws.com).
chmod 400 LTC.pem
ssh -i LTC.pem ubuntu@INSTANCE_PUBLIC_URL
Upgrade preinstalled packages (optional):
sudo apt-get update
sudo apt-get upgrade
Install required packages:
sudo apt-get install build-essential libcurl4-openssl-dev git
Install the NVIDIA driver and CUDA toolkit:
wget http://developer.download.nvidia.com/compute/cuda/5_5/rel/installers/cuda_5.5.22_linux_64.run
sudo sh cuda_5.5.22_linux_64.run
Accept the license agreement, and install the NVIDIA driver and CUDA toolkit into the default location:
Do you accept the previously read EULA? (accept/decline/quit): accept
Install NVIDIA Accelerated Graphics Driver for Linux-x86_64 319.37? ((y)es/(n)o/(q)uit): y
Install the CUDA 5.5 Toolkit? ((y)es/(n)o/(q)uit): y
Enter Toolkit Location [ default is /usr/local/cuda-5.5 ]:
Install the CUDA 5.5 Samples? ((y)es/(n)o/(q)uit): n
Use a text editor such as nano to add these lines to the end of your ~/.bashrc file:

export PATH=/usr/local/cuda-5.5/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda-5.5/lib64:$LD_LIBRARY_PATH

Then run source ~/.bashrc (or log out and back in) for the changes to take effect.
Download and build CudaMiner:

git clone https://github.com/cbuchner1/CudaMiner
cd CudaMiner
./configure.sh
make
3. Start Mining
Start cudaminer, using the pool URL, worker name and worker password you set up earlier:
./cudaminer --url=stratum+tcp://POOL_URL --userpass=WORKER_NAME:WORKER_PASSWORD -H 1 -C 1
-H 1 tells cudaminer to distribute SHA-256 hashing evenly across all CPU cores, and -C 1 turns on the texture cache for mining, which should improve performance on Kepler GPUs.
You should see output similar to:
*** CudaMiner for nVidia GPUs by Christian Buchner ***
This is version 2013-12-01 (beta)
based on pooler-cpuminer 2.3.2 (c) 2010 Jeff Garzik, 2012 pooler
Cuda additions Copyright 2013 Christian Buchner
My donation address: LKS1WDKGED647msBQfLBHV3Ls8sveGncnm
[2013-12-02 14:52:58] 1 miner threads started, using 'scrypt' algorithm.
[2013-12-02 14:52:58] Starting Stratum on stratum+tcp://ltc.give-me-coins.com:80
[2013-12-02 14:53:22] GPU #0: GRID K520 with compute capability 3.0
[2013-12-02 14:53:22] GPU #0: interactive: 0, tex-cache: 1D, single-alloc: 1
[2013-12-02 14:53:22] GPU #0: Performing auto-tuning (Patience...)
[2013-12-02 14:53:22] GPU #0: maximum warps: 459
[2013-12-02 14:55:35] GPU #0: 158.94 khash/s with configuration K32x14
[2013-12-02 14:55:35] GPU #0: using launch configuration K32x14
[2013-12-02 14:55:35] GPU #0: GRID K520, 14336 hashes, 0.09 khash/s
[2013-12-02 14:55:35] GPU #0: GRID K520, 14336 hashes, 80.29 khash/s
[2013-12-02 14:55:38] GPU #0: GRID K520, 430080 hashes, 154.46 khash/s
[2013-12-02 14:55:44] accepted: 1/1 (100.00%), 154.46 khash/s (yay!!!)
[2013-12-02 14:55:49] GPU #0: GRID K520, 1763328 hashes, 158.17 khash/s
[2013-12-02 14:55:51] GPU #0: GRID K520, 286720 hashes, 151.47 khash/s
[2013-12-02 14:55:54] accepted: 2/2 (100.00%), 151.47 khash/s (yay!!!)
[2013-12-02 14:55:54] accepted: 3/3 (100.00%), 151.47 khash/s (yay!!!)
As you can see from the output above, the GPU is hitting about 158 KH/s. This is only slightly higher than what we achieved with CPU mining, but GPU instances cost about $0.30 per hour, significantly less than the $2.00 per hour for CPU instances. Will this translate to profitable Litecoin mining? Unfortunately, no; it will just reduce our losses by an order of magnitude.
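To put the numbers side by side, here is a quick back-of-the-envelope comparison of hashing power per dollar, using the hash rates and spot prices quoted above (actual spot prices fluctuate, so treat these as rough figures):

```shell
# Hash rate (KH/s) divided by hourly spot price (USD) gives KH/s per dollar-hour.
# The figures are the ones measured in this article.
awk 'BEGIN {
    printf "CPU (c3.8xlarge): %.0f KH/s per dollar-hour\n", 144 / 2.00
    printf "GPU (g2.2xlarge): %.0f KH/s per dollar-hour\n", 158 / 0.30
}'
```

That works out to roughly 72 versus 527 KH/s per dollar-hour, which is where the order-of-magnitude improvement in cost efficiency comes from.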
4. CPU + GPU Combo
What if we were to use both the CPU and the GPU to mine? Will that give us a boost in performance? To find out, we will use screen, a useful tool that lets us run different processes in separate terminal “screens” and switch back and forth between them.
First, we need to build minerd (as we did in part 1), and then start screen:

cd ~
wget http://sourceforge.net/projects/cpuminer/files/pooler-cpuminer-2.3.2.tar.gz
tar -xzf pooler-cpuminer-2.3.2.tar.gz
cd cpuminer-2.3.2/
./configure CFLAGS="-O3"
make
cd ..
screen
Start GPU mining:
cd ~/CudaMiner
./cudaminer --url=stratum+tcp://POOL_URL --userpass=WORKER_NAME:WORKER_PASSWORD -H 1 -C 1
Press Ctrl+a Ctrl+c to start a new screen, which we will use for CPU mining:
cd ~/cpuminer-2.3.2
./minerd --url=stratum+tcp://POOL_URL --userpass=WORKER_NAME:WORKER_PASSWORD
Now, you can use Ctrl+a Ctrl+a to toggle between the two screens. With this setup, I got about 140 KH/s from cudaminer and 36 KH/s from minerd, for a combined 176 KH/s. This is only a slight increase in hash rate, and still does not make mining with Amazon EC2 GPU instances profitable.
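For completeness, the combined setup’s cost efficiency works out as follows (with the same caveat that spot prices fluctuate):

```shell
# Combined CPU+GPU hash rate on one g2.2xlarge instance, per dollar-hour
awk 'BEGIN { printf "CPU+GPU combo: %.0f KH/s per dollar-hour\n", (140 + 36) / 0.30 }'
```

That is about 587 KH/s per dollar-hour, only around 11% better than the GPU alone, since the CPU contributes little extra hash rate at no extra cost on an instance you are already paying for.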
We have tried renting virtual CPUs and GPUs from Amazon EC2 to mine Litecoins, with disappointing results. As expected, GPUs provide greater mining performance per dollar spent than CPUs, but mining Litecoins on Amazon EC2 is still not profitable. Mining is probably best left to those who have the resources to buy and operate large quantities of mining hardware at economies of scale. Even then, profits are not assured, despite today’s record-breaking Litecoin prices. For the rest of us, buying Litecoins with cash would be the cheaper and more reliable way to get some Litecoins into our wallets.