
Don’t Throw Out Those QC2.0 Chargers Just Yet!


The Nexus 5X and 6P are out, and there’s a fair amount of noise about their lack of QC2.0 support, despite both having Qualcomm chipsets. When I got my Nexus 5X, one of the first things I did was try to charge it with my QC2.0 charger. Lo and behold, the Marshmallow lock screen declared “Charging rapidly”. I then did an unscientific test: I let it charge from about 50% to 55% on the QC2.0 charger, switched to the stock charger, and compared the battery graph in Settings. There was no discernible change in the slope of the charging curve. Great!

However, not all was well. The next day I plugged in at work with the same model of charger but a different, longer cable from another brand: no “Charging rapidly”. I assume there’s some variance among the cables (both are marketed as USB 3.1 compliant), but I need to try the original cable with the other charger to be sure.

Later, I looked around online and found many reports that these phones are not QC2.0 compatible and that you should expect slow charging from QC2.0 adapters. So I plugged in my original charger and cable and took some pictures with a Kill A Watt. Still not exactly scientific, but you can clearly see that I’m getting pretty good charging out of it (~13 W), in fact slightly better than from the stock charger (~11 W). I let the phone charge on each charger for about five minutes, took a snapshot of what looked like the rough average, and swapped back and forth between the two twice, with the same results.

The charger is an Aukey QC2.0 and the cable is a Cable Matters 3 ft USB Type-A to Type-C cable, both acquired from Amazon. The pictures are bad; I didn’t want to use the Nexus 5X to take them, so I resorted to a Nexus 7 that was lying around. The Kill A Watt is set to display watts.

[Photos: Kill A Watt readings while charging the Nexus 5X; caption: “QC2.0 on Nexus 5X”]


How to tell Raspberry Pi from BeagleBone Black in C


If you’ve got a cross-platform C or C++ program and want it to compile correctly for the BeagleBone Black or the Raspberry Pi, you can key off the predefined C macro that corresponds to the ARM architecture version: the original Raspberry Pi’s BCM2835 is ARMv6, while the BeagleBone Black’s AM335x (Cortex-A8) is ARMv7-A. For example:

#include <stdio.h>

int main(int argc, char **argv)
{
#if defined(__ARM_ARCH_6__)
    printf("Hello from RPI\n");
#elif defined(__ARM_ARCH_7A__)
    printf("Hello from BBB\n");
#endif
    return 0;
}
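
The macros above are evaluated at compile time, so they only help if you build a separate binary for each board. If you need to make the call at run time instead, a rough alternative (my own sketch, not part of the original example) is to check the model string the kernel exposes; on kernels with device-tree support it lives in /proc/device-tree/model, and the exact wording varies by kernel and board revision.

#include <stdio.h>
#include <string.h>

/* Runtime board check sketch: reads the device-tree model string,
 * e.g. "Raspberry Pi Model B" or "TI AM335x BeagleBone Black".
 * The file's presence and exact contents depend on the kernel. */
int main(void)
{
    char model[256];
    size_t n = 0;
    FILE *f = fopen("/proc/device-tree/model", "r");

    if (f) {
        n = fread(model, 1, sizeof(model) - 1, f);
        fclose(f);
    }
    model[n] = '\0';

    if (strstr(model, "Raspberry Pi"))
        printf("Hello from RPI\n");
    else if (strstr(model, "BeagleBone"))
        printf("Hello from BBB\n");
    else
        printf("Unknown board\n");

    return 0;
}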


Korean 27″ Displays


I recently purchased two new 27″ displays from a Korean eBay seller. You can read all about them here and here. Basically, they use the same panels as the nice Apple displays, but panels that didn’t quite make the cut for one reason or another get sent to obscure Korean manufacturers, who sell them in monitors for a quarter of the price.

I purchased them Sunday afternoon for $319 each, shipping included, and 48 hours later they arrived. They both look remarkably good; I really had to hunt to find any discernible defect. One has two tiny stuck pixels, and the other I *think* is just a hair dimmer, though that seemed to go away with a brightness adjustment.

Really, though, this post is about how I got them to play nicely with Nvidia TwinView on my Ubuntu desktop. With the nouveau driver that’s loaded by default, one display worked fine, but to get full acceleration and TwinView I had to install the nvidia module, and for some reason it didn’t want to properly retrieve the monitor’s EDID. The result was a flickering 640x480 display; not pretty.

In troubleshooting, I noticed that ‘xrandr --prop’ would retrieve the EDID nicely, but tools like get-edid from the read-edid package would return 128 bits’ worth of ‘1’s and complain of a corrupt EDID. X seemed to pick up the proper EDID when running the nouveau driver, but not when running the nvidia driver.

So I fired up a hex editor, pasted in the EDID as reported by xrandr (all 128 bytes), saved it as a binary file, and added a custom EDID option to my xorg.conf so the nvidia driver would work with the dual displays.
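
If you’d rather not do the hex-editor step by hand, here’s a tiny sketch of that conversion (mine, not part of the original workflow): it reads the EDID hex dump as printed by ‘xrandr --prop’ on stdin, ignores whitespace, and writes the raw bytes to stdout, which you can redirect to the .bin file referenced below (e.g. ‘./hex2bin < edid.hex > shimian-edid.bin’).

#include <stdio.h>
#include <ctype.h>

/* Convert a hex dump (whitespace ignored) to raw bytes on stdout. */
int main(void)
{
    int c, hi = -1;

    while ((c = getchar()) != EOF) {
        if (!isxdigit(c))
            continue;                   /* skip spaces, tabs, newlines */
        int v = isdigit(c) ? c - '0' : tolower(c) - 'a' + 10;
        if (hi < 0) {
            hi = v;                     /* first nibble of the byte */
        } else {
            putchar((hi << 4) | v);     /* second nibble: emit the byte */
            hi = -1;
        }
    }
    return 0;
}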

You can add the following to the “Screen” section of /etc/X11/xorg.conf, just under the metamodes line or wherever you prefer.

Option "CustomEDID" "DFP:/etc/X11/shimian-edid.bin"

Note that you can also use a semicolon-delimited list for multiple displays (or so I’ve read):

Option "CustomEDID" "DFP-0:/etc/X11/shimian-edid.bin; DFP-1:/etc/X11/catleap.bin"

I’m including the QH270 EDID .bin file here, in case anyone is desperately looking for it or having a hard time creating one. Beyond the Achieva models, it should be similar to the Catleap Q270’s EDID, or may even work as a drop-in replacement.


New Linux Device Mapper Snapshot Performance


Here’s a quick weigh-in on the new experimental device-mapper thin provisioning (and improved snapshots) in Linux kernel 3.2. I recently compiled a kernel and tested it, and it looks rather promising. I would expect this to become stable more quickly than, say, btrfs (which obviously has different design goals but could also be used as a means of snapshotting files/VM disks). With any luck, LVM will gain support for it relatively soon.

These are quick and dirty: the sequential write test was ‘dd if=/dev/zero of=testfile bs=1M count=16000 conv=fdatasync’, with results in MB/s. The random I/O test was done with fio, using the following job file:
[global]
ioengine=libaio
iodepth=4
invalidate=1
direct=1
thread
ramp_time=20
time_based
runtime=300

[8RandomReadWriters]
rw=randrw
numjobs=8
blocksize=4k
size=512M


At any rate, it looks like we’re well on our way to high-performance LVM snapshots (finally!).



Ceph and RBD benchmarks


Ceph, an up-and-coming distributed file system, has a lot of great design goals. In short, it aims to distribute both data and metadata among multiple servers, providing fault-tolerant and scalable network storage. Needless to say, this has me excited, and while it’s still under heavy development, I’ve been experimenting with it and thought I’d share a few simple benchmarks.

I’ve tested two different ‘flavors’ of Ceph. The first, I believe, is referred to as the “Ceph filesystem”, which is similar in function to NFS: the file metadata (in addition to the file data) is handled by remote network services, and the filesystem can be mounted by multiple clients. The second is the “RADOS block device”, or RBD, a virtual block device created from Ceph storage. This is similar in function to iSCSI, where remote storage is mapped to look like a local SCSI device; it’s formatted and mounted locally, and other clients can’t use it without corruption (unless you format it with a cluster filesystem like GFS or OCFS).

If you’re wondering what RADOS is, it’s Ceph’s acronym answer to RAID: it stands for “Reliable Autonomic Distributed Object Store”. Technically, the Ceph filesystem is implemented on top of RADOS, and other things can use it directly as well, such as the RADOS gateway, a proxy server that provides object store services like those of Amazon’s S3. There is also a librados library that provides an API for building your own solutions.
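
To give a flavor of what using librados directly looks like, here’s a minimal C sketch (mine, written against the current librados C API, so older releases such as 0.32 may differ): it connects using /etc/ceph/ceph.conf, writes one object into a pool, and reads it back. The pool name "data" and object name "greeting" are placeholders; build with something like ‘gcc rados_hello.c -lrados’.

#include <stdio.h>
#include <string.h>
#include <rados/librados.h>

int main(void)
{
    rados_t cluster;
    rados_ioctx_t io;
    char buf[] = "hello from librados";
    char out[64] = "";  /* zero-filled so the read-back is NUL-terminated */

    /* Create a cluster handle, load the usual config, and connect. */
    if (rados_create(&cluster, NULL) < 0 ||
        rados_conf_read_file(cluster, "/etc/ceph/ceph.conf") < 0 ||
        rados_connect(cluster) < 0) {
        fprintf(stderr, "could not connect to cluster\n");
        return 1;
    }

    /* Open an I/O context on a pool ("data" is a placeholder name). */
    if (rados_ioctx_create(cluster, "data", &io) < 0) {
        fprintf(stderr, "could not open pool\n");
        rados_shutdown(cluster);
        return 1;
    }

    /* Write one object, then read it back (error handling omitted). */
    rados_write(io, "greeting", buf, strlen(buf), 0);
    rados_read(io, "greeting", out, sizeof(out) - 1, 0);
    printf("read back: %s\n", out);

    rados_ioctx_destroy(io);
    rados_shutdown(cluster);
    return 0;
}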

I’ve taken the approach of comparing Ceph fs to NFS, and RBD to both a single iSCSI device and multiple iSCSI devices striped across different servers. Mind you, Ceph provides many more features, such as snapshots and thin provisioning, not to mention the fault tolerance, but if we were to replace the function of NFS we’d put Ceph fs in its place; likewise, if we replaced iSCSI, we’d use RBD. It’s good to keep this in mind because of the penalties involved with having metadata handled at the server; we don’t expect Ceph fs or NFS to have the metadata performance of a local filesystem.

  • The Ceph (version 0.32) system consisted of 3 quad-core servers with 16 GB RAM running the mds+mon services. Storage was provided by 3 osd servers (24-core AMD boxes, 32 GB RAM, 28 available 2 TB disks, LSI 9285-8e); each used 10 disks, with one osd daemon per 2 TB disk, plus an enterprise SSD partitioned into 10 x 1 GB journal devices. I tried both btrfs and xfs on the osd devices; for these tests there was no difference. CRUSH placement specified that no replica should land on the same host, with 2 copies of data and 3 copies of metadata. All servers had gigabit NICs.
  • The second Ceph system had the monitor, mds, and osd all on one box. This was intended as a more direct comparison to the NFS server below, and used the same storage device, served up by a single osd daemon.
  • The NFS server was one of the above osd servers with a group of 12 x 2 TB drives in RAID50, formatted xfs and exported.
  • The RBD (RADOS block device) benchmarks ran on the same two Ceph systems above, from which a 20 TB RBD device was created.
  • The iSCSI server was one of the above osd servers exporting a 12-disk RAID50 as a target.
  • iSCSI-md was achieved by having all three osd servers each export a 12-disk RAID50, with the client striping across them.
  • All filesystems were mounted with noatime,nodiratime whether the options were applicable or not. All servers ran kernel 3.1.0-rc1 on CentOS 6. Benchmarks were performed with bonnie++, plus a few simple real-world tests such as copying data back and forth.

ceph-nfs-iscsi-benchmarks.ods

The sequential character writes were CPU-bound on the client in all cases; the sequential block writes (and most sequential reads) were limited by the gigabit network. The Ceph fs systems seem to do well on seeks, but this did not translate directly into better performance in the create/read/delete tests. It seems that RBD is roughly in a position where it can replace iSCSI, but Ceph fs performance needs some work (or at least some heavy tuning on my part) to get it up to speed.

It will take some digging to determine where the bottlenecks lie, but in my quick assessment most of the server resources were only moderately used, whether on the monitors, the mds, or the osd servers. Even the fast SSD journal disk only ever hit 30% utilization, and didn’t significantly boost performance over the competitors, which don’t rely on one.

Still, there’s something to be said for this: Ceph allows storage to fail, be added dynamically, be thin provisioned, rebalanced, and snapshotted, and much more, with passable performance, all in pre-1.0 code. I think Ceph has a big future in open source storage deployments, and I look forward to it maturing into a product we can leverage to provide dynamic, fault-tolerant network storage.