I've dropped the ball over the last year, but I do have an excuse. The combination of no one really using sqlchain, my wife of 16 years dying, and me leaving my home in Thailand meant I didn't spend much time updating sqlchain or this blog. I didn't even renew the domain name last year. On the upside, Namecheap had a domain sale and I was prompted to get the name back. That led to my current splurge of energy to renew my efforts to update everything. And there is actually quite a bit to document and update, along with a new direction over the next month.

In summary, last year from September to January I "lived" in the hospital caring for my wife while she had cancer. During December I also did a lot of new coding on sqlchain. I added altcoin and bech32 support, along with numerous smaller bits like unit tests and a dialog-based config tool. Most of that was released at the end of December, but with very little documentation. On January 8th my wife died. So now I have a double whammy each year, as I'll be remembering her on the 8th, right after celebrating the Bitcoin birthday on the 3rd.

One of my missions over the next while is to update here with blog posts describing the features and some of the restructuring that I did. In addition, looking forward, I'd like to replace the config tool again with a web-based admin panel. This would allow configuring a new sqlchain install, but also serve as a useful admin panel for monitoring system status, coin activity and resource use. The idea is that once you pip install sqlchain you could use your browser as the interface for configuring and adding coins and monitoring db sync status. I have ideas about making it modular so that it could blend other system info, like goaccess (web analytics) or logs, into one server overview panel. That's probably down the road a bit.

For now I just wanted to post here and say I'm back, and will be adding more posts to document the heap of new code pushed up last year. I think there are perhaps a dozen or so possible topics on my list now. Plus I have just added two new features over the last week which I'll need to explain more fully. If you are actually using sqlchain for something, it would be very helpful if you commented or let me know. It would help spur development and guide project direction.


I opened my Electrum wallet a few days ago and discovered a new donation. I don't know where it came from or who sent it (isn't Bitcoin wonderful), but I want to send a BIG Thank You! to whoever has made donations, small or big. Even little bits really help me keep going on this project, as I have quite limited discretionary resources. In this case, seeing the new donation inspired me to actually splurge and pay for a new dedicated server. I'm in the process of re-syncing again, testing new code, and moving my other web activities over to the spiffy new home.

After much scanning of WebhostingTalk, and Googling, I found something I was happy with for $30 (in btc, of course). It's been up and running for a few days doing a new sync, and shortly I'll shut down the Vultr instance after transitioning. To meet my needs it had to be cheap, have an SSD, have sufficient CPU power, and accept bitcoin, which cut down on the options; many bitcoin hosts I found were overpriced. Here's what I got for this price from Reprise Hosting in the Seattle (Westin) data center:

Dual 4-Core Xeon L5520 CPU (16 threads) (dated but capable)
16 GB DDR3 RAM (nice)
120 GB Crucial M500 SSD (sufficient)
10TB BW - 100 Mbit port (yawn)
4 usable IPs
IPMI

(Photo: the dual quad-core Xeon server)

The 100 Mbit port is the only thing I would have liked to see better, for burst transfers now and then. I can live with it for now, and so far the sync has been moving along nicely. I feel like I haven't been able to really utilize all those cores. Maybe they'll be nice to have later if I start to see some real traffic on the future Explorer site.

For your amusement here's a screen shot of the 16 cores barely more than idling while pumping txs.

I haven't been slacking off lately - just had to do a visa run (to Laos and back) to keep my stay here legit. I've been testing new tx threading code, split blobs for sig data, and fallback code for Amazon S3 storage of the blobs (using the Range header to pull small byte sequences as needed for api calls) with an in-memory LRU cache.

Putting blobs up on S3 adds less than 1 second of latency to calls that need that data, but cuts down greatly on server disk space. S3 costs $0.023/GB/mo. and I expect to put up about 30 GB (only older blocks), so roughly $0.69/mo. in place of the cost of a secondary hard disk.
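
As a rough sketch of the idea (not the actual sqlchain code; the bucket, key and cache size here are made up), pulling a byte range of blob data from S3 and keeping a small in-memory LRU cache looks something like this:

# Sketch: fetch only the bytes needed from a blob file stored on S3, caching
# recent chunks in memory. Bucket/key names and cache size are hypothetical.
from collections import OrderedDict
import boto3

s3 = boto3.client('s3')
cache = OrderedDict()          # (offset, length) -> bytes, oldest first
CACHE_MAX = 256                # keep at most 256 chunks in memory

def read_blob(offset, length, bucket='my-blob-bucket', key='blobs.0.dat'):
    k = (offset, length)
    if k in cache:
        data = cache.pop(k)    # re-inserted below to mark as most recently used
    else:
        rng = 'bytes=%d-%d' % (offset, offset + length - 1)
        data = s3.get_object(Bucket=bucket, Key=key, Range=rng)['Body'].read()
        if len(cache) >= CACHE_MAX:
            cache.popitem(last=False)      # evict the least recently used chunk
    cache[k] = data
    return data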

Once again, Thank You to my kind benefactors. It's a big help.


I was watching htop for a while and noticed it was quite active for just monitoring the mempool. It seemed like it was working darn hard for just a few txs/second on the network. Reviewing the code, I noticed it was looking up every tx in the mempool on every cycle (currently set at 5 seconds), and that's 46,000-odd lookups. Well, that's dumb. It should only look up new txs in the mempool. I moved a few lines of code around to achieve this and ran some tests. Voila!

(Screenshot: CPU for mempool monitoring)
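
The gist of the change, as a simplified sketch (the real code differs in structure and naming; process_tx is just a placeholder): remember which txids have already been handled and only query the new arrivals each cycle.

# Sketch: poll the mempool but only look up txs not seen on a previous cycle.
# getrawmempool/getrawtransaction are standard bitcoind RPC calls.
import time

seen = set()

def poll_mempool(rpc):
    global seen
    while True:
        txids = set(rpc.getrawmempool())
        for txid in txids - seen:          # new arrivals only
            process_tx(rpc.getrawtransaction(txid))
        seen = txids                       # forget txs that have left the mempool
        time.sleep(5)                      # poll interval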

I've also had occasional crashes where an RPC call returned "httplib.CannotSendRequest" exceptions. With a bit of Googling, I figured it was contention between threads causing this: a new request being attempted before the last one has received its response. Not sure why, as each thread should have its own connection; and yet there it is. So I'm currently testing some new code for an RPC pool wrapper. It opens a few connections at the start, which can then be grabbed as needed. It also traps the errors and tries to repair failed connections. Previously a bad result could nuke one thread, leaving the others pedalling away aimlessly. Hopefully this will be a step towards more robust behaviour.
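
The wrapper looks roughly like this (a sketch under assumptions, not the actual code; make_rpc stands in for however the RPC proxy object gets built):

# Sketch of an RPC connection pool: open a few connections up front, hand them
# out one at a time, and swap in a fresh connection when one throws.
import Queue                   # the module is named "queue" on Python 3

class RPCPool(object):
    def __init__(self, make_rpc, size=4):
        self.make_rpc = make_rpc
        self.pool = Queue.Queue()
        for _ in range(size):
            self.pool.put(make_rpc())

    def call(self, method, *args):
        rpc = self.pool.get()                      # blocks until a connection is free
        try:
            return getattr(rpc, method)(*args)
        except Exception:
            rpc = self.make_rpc()                  # repair: replace the bad connection
            return getattr(rpc, method)(*args)     # and retry once
        finally:
            self.pool.put(rpc)                     # always return a connection to the pool

Calls then go through the pool, e.g. pool.call('getblockhash', 449000), so no two threads ever share a connection mid-request.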


It's finally up - a public demo page using the sqlchain api backend. I think there are still a few issues, as I noticed a few example calls didn't produce the expected data. I'll look into that and figure out what's going on. This demo is served from the Vultr 2GB instance, as the Hudson Valley one is utterly incapable of catching up. I disabled sqlchain there and won't be using that system much, except maybe for backup data.

The demo is reverse proxied behind nginx and uses a free letsencrypt SSL certificate. It all seems to work well. I wasn't happy with how Python 2.7 supports SSL - it requires the key file be readable by the user running the daemon, since it reads the file on each request. Newer versions read the key once at creation, which is better because root can read it at startup and privileges can then be dropped. Anyway, using nginx bypasses that issue and allows for more flexible configurations.

I've added a bit of info to the install guide covering this. Just follow a few install steps from the nginx web site first. Worked for me first time.

Here are some current database values on the demo server:

blocks.MYD      449,877 records         37 MB
address.MYD     217,470,124 records     5,871 MB
trxs.MYD        190,540,221 records     10,289 MB
outputs.MYD     529,494,252 records     15,355 MB  (blown away by how many records now)
mempool.MYD     44,436 records          499 KB

I guess we'll see how fast the demo page works. It's not exactly on a fast server - fairly modest even: a 2 vCPU VPS with 2 GB memory and 45 GB SSD, costing 3 cents/hour or about $20/month. Oh, and there is a free 50 GB block volume mounted. Only recent transactions will show signature data (the raw transaction api call decodes this data for the html view), as I hollowed out most of the file; only about 358 MB of witness data is online.

I've been looking around for a cheap dedicated server that accepts bitcoin. There are some very cheap offerings from OVH and their group of cohorts, but they don't take bitcoin and I just don't like how many ID documents they require; plus reports are they take forever to verify them anyway. Hudson Valley has some cheap offerings, but unfortunately I no longer trust them to be honest about server sharing / load.

The best I think I've seen yet is actually Namecheap. That's a surprise because, even though I use them for domains, and I love that they accept bitcoin, in the past their hosting options were not very interesting to me. Now they offer a decent machine for a pretty good price:

Xeon E3-1220 v3 4 Cores x 3.1 GHz
RAM 8 GB DDR3
HARD DRIVE 300 GB SSD  (spec'd as Intel DC S3500 by their support)
BANDWIDTH 10 TB

$58.88/mo

Excellent price for this much SSD space, and the CPU, though not stellar, likely has a usable amount of power; unlike some places offering Atom D525s as servers at this price. It's still a bit beyond my budget unless I have some revenue stream coming from the server. So I am working on putting something up that could potentially at least cover its cost with adverts and/or user memberships.


I guess 2 GB is not enough for bitcoind when you have 1 GB allocated as the key buffer for MySQL. A few hours after syncing I had a second crash and bitcoind stopped again. This time it was no biggie, as I had a snapshot of the bitcoin directory saved from a couple hundred blocks back. A simple copy and change of permissions and it was back up again.

sudo cp -r /var/data/bitcoin.449520 /var/data/bitcoin
sudo chown btc: /var/data/bitcoin

It was pretty clear to me this was probably due to a lack of memory. It may have been the std::bad_alloc exception messages that gave it away. It plainly likes to have more than 1 GB available; but how much more? Well, it seems around 103 MB more will do - based on the last 24 hours of trial.

After this crash I decided I'd better add a swap file to the VPS. Those with 4 GB+ probably don't need this unless they have allocated too much to MySQL or other RAM-hungry inhabitants. It takes mere seconds to set up, assuming you can spare 1 GB of disk space. But first check whether you already have swap space allocated - many VPS servers come configured with zero. Use htop or free -m, and look for the swap values. To add a basic swap file:

sudo fallocate -l 1G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile

And to make it mount at boot up, add this line to your /etc/fstab file at bottom:

/swapfile   none    swap    sw    0   0

And verify again with htop or free -m. This should really help bitcoind not lose its mind when in borderline memory conditions. After 24 hours running like this, the amount of swap used shows in htop as 103 MB. It must have crawled slowly up there; when I checked shortly after creation it had only 1 MB used. So far there have not been any more crashes - but that reminds me I should go make a more recent snapshot.

sudo stop sqlchain
sudo stop bitcoin
( view the log file, smart alec - also lets you see the actual last block )
sudo cp -r /var/data/bitcoin /var/data/bitcoin.449771   (for example)
sudo rm -rf /var/data/bitcoin.440633  (again as example, I keep two snapshots)

Be sure to stop bitcoind briefly, or files could be in an unstable state; which means also stopping sqlchain. It currently doesn't behave very well when bitcoind goes away, or mysql for that matter; both fixes are on the todo list.


Now syncing again. I copied bitcoin data to the slower VPS and started sqlchain on both systems processing blocks. Here is a snapshot of the first minutes on each.

First the cheaper ($5/mo.) VPS with 300GB RAID-10 storage:

4vCPU-4GB-300GB RAID-10 (MYI+MYD) (Hudson Valley Host)

sqlchain - daemon.log:
23-01-2017 18:39:17 Using rpc mode. Monitoring blocks / mempool.
Block 449396 [ Q:8 1434 txs - 22-01-2017 - 936ms 211.7s   7 tx/s]
Block 449397 [ Q:8  895 txs - 22-01-2017 - 722ms 67.3s  13 tx/s]
Block 449398 [ Q:8  343 txs - 22-01-2017 - 209ms 212.4s   2 tx/s]
Block 449399 [ Q:8  181 txs - 22-01-2017 -  94ms 31.9s   6 tx/s]
Block 449400 [ Q:8  359 txs - 22-01-2017 - 176ms 41.4s   9 tx/s]
Block 449401 [ Q:8 1094 txs - 22-01-2017 - 229ms 76.7s  14 tx/s]
Block 449402 [ Q:8  620 txs - 22-01-2017 - 141ms 200.2s   3 tx/s]
Block 449403 [ Q:8 2321 txs - 22-01-2017 - 480ms 164.5s  14 tx/s]

MySQL - show status (2048 MB key buffer)
| Key_blocks_not_flushed                   | 0           |
| Key_blocks_unused                        | 1690322     |
| Key_blocks_used                          | 24414       |
| Key_read_requests                        | 120304      |
| Key_reads                                | 24294       |
| Key_write_requests                       | 10368       |
| Key_writes                               | 10359       |

Now the SSD based VPS (~$27/mo, 45GB SSD + 75 GB block storage):

2vCPU-2GB-45GB SSD (MYI) + 75GB block storage (MYD) (Vultr)

sqlchain - daemon.log:
23-01-2017 11:40:24 Using rpc mode. Monitoring blocks / mempool.
Block 449396 [ Q:8 1434 txs - 21-01-2017 - 301ms 9.5s 150 tx/s]
Block 449397 [ Q:7  895 txs - 21-01-2017 - 228ms 3.1s 287 tx/s]
Block 449398 [ Q:8  343 txs - 21-01-2017 -  90ms 16.1s  21 tx/s]
Block 449399 [ Q:7  181 txs - 21-01-2017 -  66ms 1.9s  93 tx/s]
Block 449400 [ Q:6  359 txs - 21-01-2017 -  76ms 1.2s 305 tx/s]
Block 449401 [ Q:7 1094 txs - 21-01-2017 - 161ms 3.4s 319 tx/s]
Block 449402 [ Q:8  620 txs - 22-01-2017 -  92ms 5.3s 117 tx/s]
Block 449403 [ Q:8 2321 txs - 22-01-2017 - 395ms 6.6s 354 tx/s]
Block 449404 [ Q:8 1702 txs - 22-01-2017 - 2034ms 35.6s  48 tx/s]
Block 449405 [ Q:7   99 txs - 22-01-2017 -  49ms 0.4s 259 tx/s]
Block 449406 [ Q:8 1286 txs - 22-01-2017 - 483ms 12.5s 103 tx/s]

MySQL - show status (1200 MB key buffer)
| Key_blocks_not_flushed                   | 8626        |
| Key_blocks_unused                        | 499465      |
| Key_blocks_used                          | 507801      |
| Key_read_requests                        | 3064005     |
| Key_reads                                | 503180      |
| Key_write_requests                       | 325917      |
| Key_writes                               | 249240      |

The SSD VPS presumably has less CPU power and RAM, yet manages roughly 20x the block rate, though CPU equivalence is hard to compare between VPS providers. On the hard disk system, htop shows CPU iowait bars about 5x higher than the active CPU bars. The slower VPS does eventually start to do better as its key buffer fills - it does have more memory and is configured to use it.

The SSD-VPS caught up to block 449,649 in 52 minutes, by which time the HD-VPS was at block 449,413. Here's what the SSD-VPS looks like after catching up. Why so fast? It monitors the mempool and adds txs as they come in, so when a new block arrives it simply has to update the relevant txs with the new block number - very quick (see the sketch after the log below).

Block 449652 [ Q:0 1418 txs - 23-01-2017 - 289ms 0.5s 2582 tx/s]
Block 449653 [ Q:0 1521 txs - 23-01-2017 - 282ms 0.7s 2321 tx/s]
Block 449654 [ Q:0 1647 txs - 23-01-2017 - 372ms 3.6s 452 tx/s]
Block 449655 [ Q:0 2191 txs - 23-01-2017 - 373ms 1.1s 2001 tx/s]
Block 449656 [ Q:0 2382 txs - 23-01-2017 - 380ms 1.0s 2486 tx/s]
Block 449657 [ Q:0 2471 txs - 23-01-2017 - 416ms 1.0s 2419 tx/s]
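
In code terms the confirm step amounts to little more than the following sketch (table and column names are illustrative, not the real sqlchain schema):

# Sketch: txs already inserted from the mempool just get stamped with the new
# block id when that block arrives; no re-parsing or re-inserting needed.
import MySQLdb

def confirm_block(db, block_id, txids):
    if not txids:
        return
    cur = db.cursor()
    cur.executemany("update trxs set block_id=%s where id=%s",
                    [(block_id, txid) for txid in txids])
    cur.execute("delete from mempool where id in (%s)" % ','.join(['%s'] * len(txids)),
                txids)
    db.commit()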

For about 75 blocks yesterday I had a fully sync'd MySQL DB of the blockchain - as of block 439,395 BC (before corruption). Here are some numbers at that time:

124 GB Full Disk Size, consisting of:

    3.1 GB Bitcoin blockchain (pruned, mostly 2.7GB of chainstate)

    57 GB MySQL DB of which,

        30 GB is MYD data files
        27 GB is MYI index files (can be rebuilt from data)

    And then there is the "witness" data (signatures)

    64 GB blobs.dat (in the sqlchain directory)

The 27GB of index files needs to be on fast SSD storage during a full sync. The blockchain and data files can get by fairly well on spinning platters. And the blobs data could be offline if need be. There is a fancy trick I figured out for offline witness data, but more on that later. Going forward now I'll be testing with all data on a 300GB RAID-10 hard disk to see how well it performs for the demo API server.

The time to sync was really hard to pin down because I had several changes in operating conditions over the sync period, and my record keeping was utterly unscientific. My recollection is that it took about 5 days to reach block 350,314 on the 4vCPU-4GB-300GB VPS, resulting in the layout below, before I then rsync'd over to the (semi) dedicated server.

@sql 350314 / btc 351040
pruned 250 of 252 files
dux /var/data
19G     /var/data/mysql
776K    /var/data/www
1.3G    /var/data/bitcoin
21G     /var/data/sqlchain
41G     /var/data

During most of that time I had bitcoind blocked with iptables rules (the blkbtc utility) so it would respond to RPC calls without being able to add new blocks. It downloads and verifies much faster than sqlchaind can add blocks to the MySQL DB. Yes, bitcoind gets so gung-ho about verifying blocks that there is little CPU time left for much else, and if sqlchaind cannot process blocks then they don't get pruned - which was a main focus of my test case here.

After the rsync to the faster SSD-based system, processing blocks sped up by a factor of 5-8. I altered the code to use two threads with queueing for inserting SQL data, which helped make better use of the multiple cores (see the sketch below). From my billing record this system ran for 102 hours, but some of that time was a rebuild due to a screw-up I made while testing the multi-thread code. I lost around 24 hours on that, leaving 78 hours to actually finish the sync. All told, about 8 days for a full sync, though if started on a faster system I'd bet on more like 4-5 days.
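
The threading change is basically a producer/consumer queue (again just a sketch; get_db and the table columns are placeholders, not the real schema): the block parsing code pushes batches of rows onto a queue and a couple of writer threads drain it with batched inserts, so parsing and SQL I/O can overlap.

# Sketch: writer threads drain a queue of row batches with executemany,
# keeping the extra cores busy while the main thread keeps parsing blocks.
import threading, Queue        # the module is named "queue" on Python 3

sql_q = Queue.Queue(maxsize=8)

def sql_writer():
    db = get_db()              # placeholder: one MySQL connection per writer
    cur = db.cursor()
    while True:
        rows = sql_q.get()     # blocks until a batch of rows is available
        cur.executemany("insert into outputs (id,addr_id,value) values (%s,%s,%s)", rows)
        db.commit()
        sql_q.task_done()

for _ in range(2):             # two writer threads
    t = threading.Thread(target=sql_writer)
    t.daemon = True
    t.start()

# the block parsing loop then just calls sql_q.put(batch_of_rows)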

In comparison, it seems a regular sync of bitcoind can take as little as 8 hours and currently uses either 108 GB of disk space or, with txindex turned on, around 226 GB. So overall I'm pretty happy with this. I get a lot of query functionality for about half the size of the txindex blockchain. And witness data purging is still an option for reducing space: removing blobs.dat cuts out 64 GB, giving a nice lean 60 GB queryable database.

And talking about blobs, here is the trick I worked out. If, for example, you are on a VPS with < 100 GB SSD available and don't want to splurge on double that (it seems these offerings double with each price step), then it's possible to have your cake and eat it too. You want to keep that disk space for MySQL data, but not entirely throw out the witness data, because maybe later you will use it. As you near full capacity during the sync you can briefly stop sqlchaind, rename blobs.dat (eg. to blobs.0.dat) and copy that offline. Then use the truncate command to create a sparse, all-zero file of exactly the same size, eg.

truncate -s 34765456543 blobs.dat

This file takes up zero disk space but holds the positional state for new sig data to be appended. It's a "hollow" blob. As more data is added you can repeat the process, but to avoid copying that many zero bytes across the net I'd suggest first copying only the new data off the top (to, eg. blobs.1.dat, with dd) and again making a new truncated file of the new total size.

dd if=blobs.dat iflag=skip_bytes skip=34765456543 bs=1M of=blobs.1.dat

Essentially you can log witness data to cheap storage as you go, using very little of the limited VPS space. Now, if I had a 512 GB SSD sitting in a dedicated colo server I wouldn't bother with all this rubbish, but then I wouldn't be pushed into figuring out these nifty workarounds.
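
For reading that data back later, the api code can check whether a requested range of blobs.dat is still real data or has been hollowed out, and fall back to the offline copy (this ties in with the S3 fallback described earlier). A naive sketch, with fetch_offline as a placeholder and ignoring the corner case of data that is genuinely all zeros:

# Sketch: read sig data from the local blob file; if the range has been
# hollowed out (all zeros in the sparse file), fall back to the offline copy.
# fetch_offline is a placeholder (e.g. an S3 Range request or a remote host).
def read_sig_data(offset, length, path='blobs.dat'):
    with open(path, 'rb') as f:
        f.seek(offset)
        data = f.read(length)
    if data.strip(b'\0'):              # any non-zero byte means it's still local
        return data
    return fetch_offline(offset, length)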

The Vultr server cost me $5 to run the tests. I deposited $5 via Paypal (they do take bitcoin but I haven't needed to fund it more yet), and they gave me a $20 bonus trial deposit. After spinning up 6 different servers and mounting 3 various (100,50,25 GB) block storage volumes I've used a total of $12. Slow hand clap.

As I write this I am nearing completion of the blockchain for a second time and will finally be restarting sqlchaind and moving back to the smaller VPS to launch the demo API server.


I experienced something new yesterday. Bitcoin can get corrupted, and I don't mean the political climate. I had a full sync done at block 439,321, and sometime later, while moving block data over to the slower VPS system to test, I noticed that bitcoind was stopped. I mean, I started it and a few seconds later it stopped itself. A quick look through debug.log showed me it had become corrupted some time before, at block 439,395, and had been spewing log errors for a while. When I stopped bitcoind to move the data I had not noticed this, and so copied the corrupt data over to the other system, overwriting blocks that were actually fine. I guess one should always check the debug log before assuming things are in a healthy state. So, copy done, I went to start it again, and it seems bitcoind will keep running after errors appear but won't start fresh with them. Bam!

The problem with a pruned blockchain is that any corruption/failure means starting from block zero again. Ouch. Fortunately, since the mysql data is detached from the bitcoind data, it was not affected and just has to wait for the re-sync. And the bonus of a pruned blockchain, and my lesson for today, is that at only about 2 GB it's not unwise to keep checkpoint copies of the bitcoin directory as a safeguard. If the chain gets corrupted, swap in a good recent copy and catch up again. Corollary: check the debug.log in your backup to make sure your checkpoints aren't corrupted before rolling over.

I'm using this as an excuse for delaying my Full-Sync Report - I am working on it.

