Welcoming the OpsTools SIG

It gives me great pleasure to welcome Matthias Runge, Rich Megginson and the entire community to the OpsTools Special Interest Group in CentOS.

The OpsTools SIG is a group focused on building, testing and delivering tools typically used in operations roles across various types of infrastructure, including cloud deployments, distributed application management, logging and monitoring. The initial plan is to focus on three areas:

  • Performance monitoring: Collectd – Graphite – Grafana
  • Availability monitoring: Sensu – Uchiwa
  • Centralised logging: Fluentd – Elasticsearch – Kibana

In order to achieve their goals, the plan is to work extensively with the other established Special Interest Groups, including the Cloud SIG, the PaaS SIG, the Config Management SIG and the Atomic SIG. The aim is to complement the excellent work already being done in those spaces, and also to help integrate some of the tooling they are working with. As an example, the SIG could curate config management snippets that can then be consumed by the Config Management SIG as they curate the actual tools like Ansible and Puppet.

The initial scope of the work is laid out in their wiki landing page at https://wiki.centos.org/SpecialInterestGroup/OpsTools; however they are very open to contributions that complement this work, including new project code being added in, improvements to the stacks they are working with, traction onto other architectures, etc. Even if you just have ideas on what we could be doing better, differently or in addition to the initial scope, we’d like to hear from you. So come join us on the CentOS-Devel list at https://lists.centos.org/mailman/listinfo/centos-devel and say hi.

Over the coming days, we should have the build tags for the OpsTools SIG on https://cbs.centos.org, and we should have content flowing through. Keep your eyes on the lists for that.

Regards,

Welcome CDN77.com as a CentOS Project Sponsor

Over the years, the CentOS Project infra has exclusively been run on donated hardware, managed by the CentOS infra team from end to end. Most of the edge content delivery has happened from third party mirrors – these mirrors have helped massively in ensuring that we’re able to deliver content rapidly, in a verifiable way, across the world to any yum operation run from a CentOS Linux machine.

Since this mirror network is so focused on delivering end user content, and only carries released, signed content, we decided not to put buildlogs.centos.org content on it when we set that service up as a way for developers and users to see early content, fresh from the Community Build service, or content that people were working on. There is a much smaller consumption base for this content, and the price of mirror disk space is very high for content that does not otherwise see much movement. This model worked fine; we ran a few machines behind buildlogs.centos.org for almost a year and a half before we started hitting capacity issues – mostly network latency, which in some areas of the world was poor when served from the US and EU, where we ran these machines.

It was at this time that Oskar Gottlieb from CDN77.com got in touch offering to help seed some of our content! We were more than happy to take this up, but had to work through the differences in how their system works versus what we had in place at the time. After a brief test, Fabian announced our move to serving all buildlogs.centos.org binary content from CDN77. You can read this announcement here: https://lists.centos.org/pipermail/centos-devel/2016-March/014552.html

What we are effectively doing is serving the repodata and metadata for buildlogs.centos.org from machines run on CentOS resources, managed by the CentOS infra team, while any actual content ( rpms, images etc ) that anyone fetches is offloaded to the CDN77 network across the world.
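From the user side this split is transparent: a yum repo definition just points at buildlogs.centos.org as usual, and the CDN offload happens behind the scenes. A purely illustrative sketch ( the repo id and path here are placeholders, not a published repo ):

# /etc/yum.repos.d/buildlogs-example.repo -- illustrative placeholder only
[buildlogs-example]
name=Example buildlogs repo
baseurl=http://buildlogs.centos.org/centos/7/example-sig/x86_64/
enabled=0
# buildlogs carries early, unsigned content, so gpgcheck stays off
gpgcheck=0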

This has been hugely beneficial for us: it reduced the amount of resources we needed to continue to meet the growing demand for content, and it also allowed our associated SIG projects to rapidly seed out devel and testing content ( eg. the RDO Project offloads their tripleo images to buildlogs.centos.org, which are then also served from the CDN77.com network ).

I’d like to welcome CDN77.com to our sponsor network. It’s sponsors like them who’ve kept the project alive and well resourced over the years – it’s a network we continue to rely on extensively. If you use CentOS Linux and would like to join the sponsor network, please get in touch.

regards,

CentOS Project group on Facebook is over 20k users

[Image: centos-facebook-members]

The CentOS Project’s Facebook group at https://www.facebook.com/groups/centosproject/ just went over 20,000 users ( it’s at 20,038 at the moment ). A great milestone, and many thanks for all the support. A large chunk of the credit goes to Ljubomir Ljubojevic ( https://www.facebook.com/ljubomir.ljubojevic ) for his time spent curating the admin team on the Facebook group.

And a shout out to Bert Debruijn, Wayne Gray, Stephen Maggs, Eric Yeoh and the rest of the 20k users. Well done guys; the next step is the 40k mark, but more importantly – keep up the great help and community support you provide each other.

Regards,

Forcing CPU speed

Most of the time tuned-adm can set fairly good power states, but I’ve noticed that when I want powersave as the active profile, to try and maximize battery life, it will still run with the ondemand governor. In some cases, eg. when on a plane and spending all my time in a text editor, that’s not convenient ( either due to other apps running, or when you really want to get that 7 hr battery life ).

On CentOS Linux 7, you can use a bit of a hammer solution in the form of /bin/cpupower, installed as part of the kernel-tools rpm. This will let you force a specific cpu frequency range with the frequency-set command, using the -d ( min speed ) and -u ( max speed ) options, or just set a fixed rate with -f. As an example, here is what I do when getting on the plane:

/bin/cpupower frequency-set -u 800MHz
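For completeness, here is a quick sketch of the other frequency-set options mentioned above, plus handing control back to the governor afterwards ( the values are just examples; check cpupower frequency-info for what your cpu actually supports ):

# show the supported frequency range and the current governor
cpupower frequency-info

# pin both ends of the range: min 800MHz, max 2000MHz ( example values )
cpupower frequency-set -d 800MHz -u 2000MHz

# or force a fixed rate ( needs the userspace governor to be available )
cpupower frequency-set -f 800MHz

# hand control back to the default governor ( ondemand on my setup )
cpupower frequency-set -g ondemand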

Stuff does get lethargic on the machine, but at 800MHz and with all the external devices / interfaces / network bits turned off, I can still squeeze about 5 hrs of battery life from my X1 Carbon gen2, which has:

model: 45N1703
voltage: 14.398 V
energy-full-design: 45.02 Wh
capacity: 67.3701%
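For context, those numbers work out to roughly 45.02 Wh x 0.673 ≈ 30.3 Wh of usable charge, so 5 hrs of runtime implies an average draw of around 6 W.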

Of course, you should still set “tuned-adm profile powersave” to get the other power save options, and watch powertop with your typical workload to get an idea of where there might be other tuning wins. And if anyone has thoughts on what to do when that battery capacity hits 50 – 60%… it does not look like the battery on this lenovo x1 is replaceable ( or even sold! ).
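For reference, the profile side of this is just stock tuned-adm / powertop usage:

# apply the powersave tuning profile and verify it took
tuned-adm profile powersave
tuned-adm active

# then watch where the power is actually going under your workload
powertop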

regards,

Getting Started with CentOS CI

We have been building out a CentOS Community CI infra, open to anyone working on infra code or areas related to CentOS Linux, and have now onboarded a few projects. You can see the web ui ( jenkins! ) at https://ci.centos.org/.

Dusty has also put together a basic getting started guide, which also goes into some of the specifics of how and why the CentOS CI infra works the way it does; check it out at http://dustymabe.com/2016/01/23/the-centos-ci-infrastructure-a-getting-started-guide/.

Regards,

A few changes in CentOS Atomic Host build scripts

hi,

If you use the CentOS Atomic Host downstream build scripts at https://github.com/CentOS/sig-atomic-buildscripts you will want to note a major change in the downstream branch. The older build_ostree_components.sh script has now been replaced with 3 scripts: build_stage1.sh, build_stage2.sh and build_sign.sh. Running build_stage1.sh followed by build_stage2.sh will give you exactly the same output as the old script used to.

The third script, build_sign.sh, makes it easier to sign the ostree repo before any of the images are built. In order to use this, generate or import your gpg secret key, drop the resulting .gpg file into /usr/share/ostree/trusted.gpg.d/, edit the keyid at the end of the build_sign.sh script, and run the script after your build_stage1.sh is complete ( and before you run build_stage2.sh ). You will notice a pinentry window pop up; enter the password, and check for a 0 exit. Note that the gpg sign is a detached sign of the ostree commit.
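As a rough sketch of that flow ( YOUR_KEYID and the key file name are placeholders for your own key ):

# make the public half of the signing key available to ostree
gpg2 --export YOUR_KEYID > /usr/share/ostree/trusted.gpg.d/mykey.gpg

# build, sign ( after editing the keyid at the end of build_sign.sh ),
# then carry on with stage 2
./build_stage1.sh
./build_sign.sh && echo "sign ok"   # pinentry will prompt for the passphrase
./build_stage2.sh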

regards,

CentOS Meetup in London 3rd Dec 2015

Hi,

We now have a CentOS users and contributors group for the UK on meetup.com ( http://www.meetup.com/CentOS-UK/ ), and I hosted the inaugural meetup over beer a few days back. It was a great syncup, with lots of very interesting conversations. One thing that always comes through at these meetings, and that I really appreciate, is the huge diversity in the userbase, and the very different viewpoints and value propositions that people bring to the CentOS Linux platform and the larger ecosystem around it.

The main points that stuck with me over the evening were the CentOS Atomic Host ( https://wiki.centos.org/SpecialInterestGroup/Atomic/Download ) and the CentOS on ARM devices ( and the general direction of where ARM devices are going ). Stay tuned for more info on that in the next few weeks.

Looking forward now to the next London meetup ( likely the 2nd week of Jan ’16 ), and also to joining some meetings in other parts of the UK. Everyone is welcome to join, and I could certainly use help organising meetups in other places around the UK. See you at a CentOS meetup soon.

Regards,

The portable cloud

In late 2012 I constructed myself a bare bones cluster out of a couple of motherboards, stacked up and powered, to be used as a dev cloud. It worked, but it was a huge mess on the table, and it was certainly neither portable nor quiet. That didn’t stop me carrying it around – I did, across the Atlantic a few times, and over to Asia once. It worked. Then in 2014 I gave the stack away, which caused a few issues, since living in a certain part of London means I must put up with a rather sad 3.5mbps adsl link from BT. Had I been living in a rural setting, government grants etc would ensure we get super high speed internet, but not in London.

I really needed my development and testing cluster back ( my work pattern had come to rely on it ). Time to build a new one!

Late summer last year, the folks at Protocase kindly built me a cloud box, to my specifications. This is a single case that can accommodate up to 8 mini-itx ( or 6 mini-ATX, which is what I am using ) motherboards, along with all the networking kit for them and a disk each. It’s not yet left the UK, but the box is reasonably well traveled in the country. If you come along to the CentOS Dojo, Belgium or the CentOS table at Fosdem, you should see it there in 2016. Here you can see the machine standing on its side, with the built-in trolley for mobility.

Things to note here: you can see the ‘back’ of the box, with the power switches, the psu with its 3 hot swap modules, the 3 large case cooling fans, and the cutout for the external network cable to come into the box. While there is only 1 psu, the way things are cabled inside the box it’s possible to power up to 4 channels individually, so with 8 boards you’d be able to power manage each pair on its own.

[Image: Box-1]

Here is the empty machine as it was delivered. The awesome guys at Protocase pre-plumbed in the psu and wired up the case fans ( there are 3 at the back and 2 in the front; the ones in the front are wired from the psu so they run all the time, whereas the back 3 are connected as regular case fans onto the motherboards, so they come up when the corresponding machine is running ). I thought long and hard about moving the fans to the top/bottom, but since the machine lives vertically, this position gives me the best airflow. On the right side, opposite the psu, you can see 4 mounting points; this is where the network switch goes in.
[Image: Box-2]

Close up of the PSU used in this machine. I’ve load tested this with 6x i5 4690K boards and it works fine; I did test with load, for a full 24 hrs. Next time I do that, I’ll get some wattage and amp readings as well. It’s rated for 950w max, and I suspect anything more than 6 boards will get pretty close to that mark. Also worth keeping in mind is that this is meant to be a cloud or mass infra testing machine; it’s not built for large storage. Each board has its own 256gb ssd, and if I need additional storage, that will come over the network from a ceph/gluster setup outside the box.
[Image: Box-3]

The PSU output is split and managed in multiple channels; you can see 3 of the 4 here, along with some of the spare case fan lines.
[Image: Box-4]

Another shot of the back 3 fans; you can also see the motherboard mounting points built into the base of the box. They put these in for mini-itx / mini-ATX as well as regular ATX boards. I suspect it’s possible to get 4 ATX boards in there, but it’s going to be seriously tight, and the case fans might need an upgrade.
[Image: Box-5]

Close up of the industrial trolley that is mounted onto the box ( it’s an easy remove for when it’s not needed; I just leave it on ).
[Image: Box-6]

The right side of the box hosts the network switch; this allows me to put the power cables on the left and back, with the network cables on the right and front. Each board has its own network port ( as they do.. ), and I use a usb3 to gbit converter at the back to give me a second port. This then allows me to split public and private networks, or use one for storage and another for application traffic, etc. Since this picture was taken, I’ve stuck another 8 port switch on the front of this switch’s cover, to give me the 16 ports I really need.
[Image: Box-7]

Here is the rig with the first motherboard added in, with an intel i5 4690K cpu. The board can do 32 gb of ram; I had 16 in it then, and have upgraded since.
[Image: Box-8]

Now with everything wired up. There is enough space under the board to route the network cables through.
[Image: Box-9]

And with a second board added in – this time an AMD fx-8350. It’s the only AMD in the mix; I wanted one to have the option to test with, and the rest of the rig is all intels. The i5’s have fewer cores, but overall far better power usage patterns, and they run cooler. With the box fully populated and running a max load, things get warm in there.
[Image: Box-10]

The boards layer up on top of each other, with an offset; in the picture above, the intel board is aligned to the top of the box, and the next tier board was aligned to the bottom side of the box. This gives the cpu fans a bit more head room, and has a massive impact on temperature inside the box. Initially, I had just stacked them up 3 on each side, and ambient temperature under sustained load was easily touching 40 deg C in the box. Staggering them meant ambient temperature came down to 34 deg C.

One key tip was Rich Jones discovering threaded rods; these fit right into the motherboard mounting points, and run all the way through to the top of the box. You can then use nuts on the rods to hold each motherboard at whatever height you need.

If you fancy a box like this for yourself, give the guys at Protocase a call and ask for Stephen MacNeil; I highly recommend their work, and the quality of the build is excellent. In a couple of years time, I am almost certainly going to be back talking to them about the cloudybox2. And yes, they are the same guys who build the 45drives storinator machine.

update: the box runs pretty quiet. I typically only have 2 or 3 machines running in there, but even with all 6 running a heavy sustained load, it’s not massively loud; the airflow is doing its thing. The key thing there is that the front fans are set to ingest air, and they line up perfectly with the cpu placements, blowing directly at the heat sinks. I suspect the top most tier boards only get about 50% of the airflow compared to the lower two tiers, but they also get the least utilisation of the lot.

enjoy!

CentOS Linux 5 Update batch rate

Hi,

We typically push updates in batches. A batch might be anywhere from 1 update rpm to hundreds ( for when there is a big update upstream ), however most batches are in the region of 5 to 20 rpms. So how many batches have we done in the last year and a bit? Here is a graph depicting our update batch release rate from Jan 1st 2014 till today.

[Image: cl5-update-batch-rate]

I’ve removed the numbers from the release rate, and left the dates in, since it’s the trending that is most interesting. In a few months time, once we hit the new year, I’ll update this to split by year so it’s easy to see how 2015 compared with 2014.

You can click the image above to get a better view. The blue segment represents batches built, and the orange represents batches released.

regards,

CentOS Atomic Host in AWS via Vagrant

Hi,

You may have seen the announcement that CentOS Atomic Host 15.10 is now available ( if not, go read the announcement here : http://seven.centos.org/2015/10/new-centos-atomic-host-release-available-now/ ).

You can get the Vagrant boxes for this image via the Atlas / VagrantCloud process, or just via direct downloads from http://cloud.centos.org/centos/7/atomic/images/

What I’ve also done this time is create a vagrant box for the aws provider that references the AMIs in the regions they are published in. This is hand crafted and really just a PoC-like effort, but if it’s something people find helpful, I can plumb this into the main image generation process and ensure we get this done for every release.

QuickStart
Once you have vagrant running on your machine, you will need the vagrant-aws plugin. You can install this with:

vagrant plugin install vagrant-aws

and check it’s there with:

vagrant plugin list

You can then add the box with “vagrant box add centos/atomic-host-aws”. Before we can instantiate the box, we need a local config with the aws credentials. So create a directory, and add the following into a Vagrantfile there:

# Minimal Vagrantfile for the centos/atomic-host-aws box; fill in your
# own AWS credentials and keypair details below.
Vagrant.configure(2) do |config|
  config.vm.box = "centos/atomic-host-aws"
  config.vm.provider :aws do |aws, override|
    aws.access_key_id = "Your AWS EC2 Key"
    aws.secret_access_key = "Your Secret Key"
    aws.keypair_name = "Your keypair name"
    # private half of the keypair named above, used for ssh
    override.ssh.private_key_path = "Path to key"
  end
end


Once you have those lines populated with your own information, you should now be able to run:

vagrant up --provider aws

It takes a few minutes to spin up the instance. Once done, you should be able to “vagrant ssh” and use the machine. Just keep in mind that you want to terminate any unused instances, since stopping will only suspend them; a real vagrant destroy is needed to release the ec2 resources.
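For clarity, the full lifecycle is just the standard vagrant commands:

vagrant up --provider aws   # boot the instance in ec2
vagrant ssh                 # log in once it is up
vagrant destroy             # actually terminate the instance and free the ec2 resources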

Note: this box is setup with the folder sync feature turned off. Also, the AMIs per region are specified in the box itself; if you want to use a specific region, just add aws.region = "<your region>" into your local Vagrantfile, and everything else should get taken care of.

You can read more about the aws provider for vagrant here: https://github.com/mitchellh/vagrant-aws

Let me know how you get on with this; if folks find it useful, we can start generating these for all our vagrant images.