Sep 28

Go executables are statically linked, except when they are not.

Generally Go executables are advertised as statically linked.

And they are, mostly, except for those times when they aren’t.

This is about one of those times, and what I discovered in the process.


Sep 16

Docker Volumes Quirk

I discovered something interesting today regarding Docker and volumes — there’s a problem below with the call to run the private docker registry container:

docker run -d -v /opt/registry-cache/:/tmp/registry -p 5000:5000 registry

If you run that and curl -s http://localhost:5000, nothing is returned.

As it turns out, the trailing slash in /opt/registry-cache/ is the culprit: the web proxy starts up, but the actual registry doesn’t run. To get it working, drop the trailing slash:

docker run -d -v /opt/registry-cache:/tmp/registry -p 5000:5000 registry

It’s amazing the difference a single character can make. Remove the extra '/' and it works as expected.
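One defensive habit (my own suggestion, not something from the docker docs): normalize the host path before building the -v argument, so a trailing slash can never sneak in:

```shell
#!/bin/sh
# Strip a single trailing '/' with POSIX parameter expansion
src="/opt/registry-cache/"
src="${src%/}"
# Prints: docker run -d -v /opt/registry-cache:/tmp/registry -p 5000:5000 registry
echo "docker run -d -v $src:/tmp/registry -p 5000:5000 registry"
```

`${src%/}` removes the shortest trailing match of `/`, and is a no-op when the path already lacks one, so it is safe to apply unconditionally.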

Jul 27

Using Openstack/Devstack Floating IPs from outside

I hope that this is of use to someone. After many hours of tearing down, building up, programming routers, etc., I’ve figured out why my devstack wasn’t allowing access from floating IPs. I knew that the address was resolving (arp -a); I just couldn’t ssh to it (or do anything else with it).

It was the security rules.

So, I went in to the security rules in horizon and added the following rules to the default:

  1. Allow all CIDR ICMP traffic
  2. Allow all CIDR TCP traffic
  3. Allow all CIDR UDP traffic

Generally I wouldn’t advise opening it up totally — I’d open up applicable ports.  However, I’m happy to have it working!
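The same rules can be added from the command line instead of horizon; a sketch using the nova client of that era (these are the wide-open rules described above, so the same caveat applies):

```shell
# Open ICMP, all TCP, and all UDP on the default security group
# from any source CIDR -- fine for a lab devstack, not for production.
nova secgroup-add-rule default icmp -1 -1    0.0.0.0/0
nova secgroup-add-rule default tcp   1 65535 0.0.0.0/0
nova secgroup-add-rule default udp   1 65535 0.0.0.0/0
```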

Aug 01

Cloudy Update

I’ve been working with Docker a good bit and have updated my list of tools.  Here’s a quick dump of where I am in the design of the infrastructure.

  • collectd will be used to monitor cgroup statistics.  This necessitates compiling all or part of collectd, since the current packages do not contain the cgroup plugin. It will run on the hosts which run the containers, and information about the hosts themselves will be collected as well.  Thresholds will be used to send alerts, scale services up or down, etc.  Graphite or some other tool will provide graphs for the dashboards.
  • A log aggregation tool will capture logs from the various containers.  I’m considering logstash due to the large number of inputs which are already defined.  OpenTSDB is another option; it looks like it may be more powerful in some senses, but more difficult to configure in others.  My main concern with both is that they’re Java based.  In the case of logstash, a Java collector must run in each container; even with a default-sized JVM, assume ~128 MB per container.  That adds up quickly and I’d rather have something lighter.  I’ve not done enough research on OpenTSDB to speak to what it uses.
  • Dynamic DNS will be used to register the various services.  At present I am leaning toward PowerDNS using PyPdnsRedis as a pipe backend and redis to store the dynamic data.
  • Nginx will sit in front of hipache which is a dynamic proxy/load balancer which uses redis. I have not decided whether to use the node flavour of hipache or the one embedded in nginx — that needs to be tested. Calls to the services will be routed through the proxy.
  • An image repository will be available for local hosting of images.
  • A web-based front end to configure hosts, services, and provide a dashboard view.  Given the many other pieces which are using redis, I am seriously contemplating using it to store the data used to define services and hosts.  I’m not 100% sure of this yet, however.
  • Services consist of a particular process, such as a restful service running in a web server, or a jvm, or …. In their definition, the following information will be stored:
    • The name of the service
    • Ports which need to be exposed
    • Whether the service is active
    • The image
    • Dependencies – particularly services which need to be running prior to the start of this service
    • The minimum and maximum number of instances of the service
    • Thresholds
    • System requirements (cpu, memory)
    • Heartbeat – this is in addition to the thresholds
    • There may be a sort of inheritance to help cut down on duplicate information.
  • Hosts run the docker daemon and host containers.  Their information consists of:
    • Name of the host
    • IP address (may be dynamic)
    • Is the host available
    • Does the host need to be started (is it out in the public cloud)
    • Any cloud information needed to start it.
    • Priority – this determines the order in which containers and services are started – I anticipate that private hosts will have priority over public cloud hosts — due to expense; it makes sense to have overflow go to the public cloud.
    • Server specs
    • Currently available resources — this can be grabbed, in part from collectd.
  • There may be a discovery process, akin to the old Sun Jini project whereby a service can advertise itself and other services can utilize that service.  I could see this being used, for instance, if a service needs a cache.  Databases likely would not be as useful.
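As a sketch of how a service definition might land in redis, one hash per service (the service name and every field below are my own placeholders; nothing here is settled yet):

```shell
# Hypothetical layout: one redis hash per service, keyed by name
redis-cli HMSET service:billing-api \
    name billing-api image registry.local/billing-api active 1 \
    min_instances 1 max_instances 5 ports 8080 memory_mb 256 \
    depends_on service:billing-db
```

A hash keeps each definition readable from redis-cli and lets individual fields (say, max_instances) be updated without rewriting the whole record.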

A picture will follow soon.  However, a good bit of the work’s already been done — where possible I’m integrating existing tools and projects.  Obviously the front end will need to be written.

I’ve decided that I’m going to repurpose my nimblestratus project on github — it’s not like I’d really done much of anything with it.  Unfortunately someone registered nimblestratus.com two weeks ago, but .org is still available.

I’m pretty excited, though — the pieces/parts are coming together in my head and I believe that this is do-able in fairly short order.  I really, really want to have a simple proof of concept done in the next couple of weeks, or by the end of August — I’m going to Cloud Develop and would love to have something for a “show-and-tell”.

Jul 23

On demand containers

I’ve been interested in Linux Containers for quite a while; I think they have a sweet spot where they are better than virtual instances, in particular because they require fewer resources.

I am working on a framework to use linux containers for on-demand computing — increasing or decreasing instances of applications as needed.  I envision it being used for things such as:

  • JBoss or other application servers
  • Internet Applications
    • Web
    • Rails
    • node.js
  • Databases
  • Caches
  • And more

At this point I’m planning to use the following:

  • Docker
  • HAProxy
  • TBD Monitor
  • TBD Management (probably written by myself)

At this point, I think the key is in being able to dynamically assign units to HAProxy.  HAProxy: Reloading Your Config with Minimal Service Impact describes a method for activating changes to HAProxy.  Unfortunately, HAProxy doesn’t provide a good API to add/remove hosts as necessary.  I want to support both public and private clouds, with the idea being to add resources from public clouds when private cloud resources are depleted.
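For reference, the graceful-reload trick that article describes boils down to a single invocation (the config and pid file paths are the common defaults; adjust for your install):

```shell
# Start a new haproxy; -sf tells the old process(es) to finish
# in-flight connections and then exit, so the reload is near-seamless.
haproxy -f /etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid \
        -sf $(cat /var/run/haproxy.pid)
```

The catch, as noted above, is that this still requires regenerating the config file and re-running the command for every backend change, rather than adding or removing hosts through an API.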

Why am I doing it?  For one, well, it’s interesting to me. For another, I haven’t heard of any open source tools/frameworks which do this.  Additionally, I’ve seen many instances where virtual instances are over-provisioned or sit idle wasting resources, particularly when JVMs are running inside a virtual instance.

There’s still a number of unknowns, but I think it’s workable.

Jun 22

cygwin and torquebox and rvm, oh my!

rvm, despite being wonderful, doesn’t play very well with torquebox under cygwin. In particular, the gem paths are not working. So, in order to fix this, simply do:

rvm use system

And then it will work right. Once you’re done with the torquebox work, you can go back to using rvm.

Nov 04

JBoss Client Jars for Messaging

Prior to JBoss 5, the jbossall-client.jar was pretty much all you needed. However, the JBoss 5 Getting Started Guide states the following:

The client/jbossall-client.jar library that used to bundle the majority of jboss client libraries, is now referencing them instead through the Class-Path manifest entry. This allows swapping included libraries (e.g. jboss-javaee.jar) without having to re-package jbossall-client.jar. On the other hand, it requires that you have jbossall-client.jar together with the other client/*.jar libraries, so they can be found.

In order to access JBoss Messaging from a remote client, you need the following jars in the client’s CLASSPATH:

  • $JBOSS_HOME/client/jnp-client.jar
  • $JBOSS_HOME/client/jboss-javaee.jar
  • $JBOSS_HOME/client/jboss-messaging.jar
  • $JBOSS_HOME/client/jboss-remoting.jar
  • $JBOSS_HOME/client/jboss-serialization.jar
  • $JBOSS_HOME/client/javassist.jar
  • $JBOSS_HOME/client/jboss-aop-client.jar
  • $JBOSS_HOME/client/trove.jar
  • $JBOSS_HOME/client/log4j.jar
  • $JBOSS_HOME/client/jboss-logging-spi.jar
  • $JBOSS_HOME/client/jboss-logging-log4j.jar
  • $JBOSS_HOME/client/jboss-common-core.jar
  • $JBOSS_HOME/client/jboss-mdr.jar
  • $JBOSS_HOME/client/concurrent.jar
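Typing that list out by hand is error-prone, so here is a small sketch that assembles the CLASSPATH from it (the default JBOSS_HOME below is just an example location; point it at your install):

```shell
#!/bin/sh
# Build the remote-client CLASSPATH from the jars listed above.
JBOSS_HOME=${JBOSS_HOME:-/opt/jboss-5.1.0.GA}
CP=""
for jar in jnp-client jboss-javaee jboss-messaging jboss-remoting \
           jboss-serialization javassist jboss-aop-client trove log4j \
           jboss-logging-spi jboss-logging-log4j jboss-common-core \
           jboss-mdr concurrent; do
  # ${CP:+:} emits a ':' separator only when CP is non-empty
  CP="$CP${CP:+:}$JBOSS_HOME/client/$jar.jar"
done
echo "$CP"
```

Run it with `CLASSPATH=$(sh build-cp.sh)` (or paste the loop into your launch script) before starting the messaging client.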

Aug 04

rsh hates nohup

rsh does not like nohup — rather than returning, as one would expect, it just hangs.

This has to do with stdin, stdout and stderr.

The short of it is….

By preference use ssh.

If you must use rsh, make sure that stdin, stdout, and stderr are all redirected properly.
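A sketch of the difference (the hostname and job are placeholders):

```shell
# Hangs: rsh holds the connection open on the remote job's streams
rsh buildhost nohup ./long_job.sh &

# Returns promptly: stdin, stdout, and stderr are all redirected away,
# so rsh has nothing left to wait on
rsh buildhost 'nohup ./long_job.sh >/dev/null 2>&1 </dev/null &'
```

Note the quoting: in the second form the redirections and the trailing `&` are part of the remote command, not the local one.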

Jun 09

Torquebox and Cygwin: Take I

Torquebox and Cygwin don’t work as nicely together as one might hope.

That said, here are a couple of findings:

  1. In order to deploy, you need to set $JBOSS_HOME to the Windows path.  You can do this via export JBOSS_HOME=$(cygpath -w PATH_TO_JBOSS).
  2. Additionally, JRUBY_HOME needs to be a Windows path as well.  Otherwise you will see:

    org.jruby.exceptions.RaiseException: no such file to load -- date

That’s it so far, more to follow as discovered….

Mar 02

Rails & JRuby in a Jar

For political reasons, I needed to ship a single jar file.  I didn’t want to have the overhead of a war & an appserver, so I set out to embed my rails app within a single jar file.  I needed to make some changes in order to get it to work.

This assumes Rails 2.3.5 and JRuby 1.4.0.

First, you need to create your rails app.   Freeze rails and any gems required.

Next download jruby-complete.  Once you’ve done so, unzip it.  I’m assuming you’ve unzipped it to complete.

Note: When you repackage the jar file, DO NOT use jar.  Use zip.  This is very important.  If you don’t, you’ll trash your jruby instance.

Copy your rails instance to complete/META-INF/jruby.home. I assume it will be complete/META-INF/jruby.home/rails.

Next create complete/META-INF/jruby.home/bin/server with the following content:

The RAILS_ROOT needs to be set in order to have the proper paths within the jar file.

Next, edit complete/META-INF/jruby.home/rails/vendor/rails/railties/lib/initializer.rb. In the set_root_path method, make it look like this:

This is related to paths within the jar file.

Additionally, you need to change the initialize_logger method (this might not be needed; see below). Change

to

The idea is to change it to something definitely outside the jar. The reason I did this, rather than changing the config in environment.rb, was that the changes in environment.rb were not getting picked up. I’ve since come to believe this is due to an issue with the Rack LogTailer detailed at https://rails.lighthouseapp.com/projects/8994/tickets/2350-logtailer-ignores-configlog_path. So complete/META-INF/jruby.home/rails/vendor/rails/railties/lib/rails/rack/log_tailer.rb needs to be edited. Edit the EnvironmentLog assignment to match the file we’d specified. (You may be able to substitute RAILS_DEFAULT_LOGGER, but I have not tested that.)

That’s pretty much it. Zip up your exploded jar from the complete directory: zip -r ../complete.jar *. Then you can run it via java -jar complete.jar -S server.

Enjoy!
