Note that the mirrored segments are named $LVNAME_mimage_0 and
$LVNAME_mimage_1. The logical volume is 5GB, which is bigger than any of the
physical volumes, which are all 4GB, so each 5GB mirror image is split across
a couple of physical volumes. LVM will try to "do the right thing" and
prevent mimage segments from occupying the same physical devices.
There's a third part, the disklog, named $LVNAME_mlog. In this example, part
of mimage_0 and mlog are on the same physical volume, vdb.
When you create a mirror, you can also use "corelog" instead of disklog.
This keeps the mirror log in RAM instead of on a disk. This way, you
don't need to have a disklog segment.
The downside is that the mirror will have to be synchronized at every boot.
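Creating such a mirror comes down to passing --mirrorlog core to lvcreate. A sketch with made-up names (the VG and LV here are hypothetical, not the ones from this walkthrough):

```shell
# Hypothetical 2-way mirror whose log is kept in RAM (--mirrorlog core).
# With the default on-disk log, LVM would also allocate an _mlog segment
# on one of the physical volumes.
sudo lvcreate -m 1 --mirrorlog core -L 2G -n lv_mirror vg_example
```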
Since /dev/vde and /dev/vdf don't have any LV segments on them, we'll take
them out of vg_lvm_test with the vgreduce command, and put them into a new
volume group, vg_corelog_test.
$ sudo vgreduce vg_lvm_test /dev/vde /dev/vdf
Removed "/dev/vde" from volume group "vg_lvm_test"
Removed "/dev/vdf" from volume group "vg_lvm_test"
$ sudo vgcreate vg_corelog_test /dev/vde
Volume group "vg_corelog_test" successfully created
Let's make room by getting rid of our corelog thing.
$ sudo vgremove vg_corelog_test
Do you really want to remove volume group "vg_corelog_test" containing 1
logical volumes? [y/n]: y
Do you really want to remove active logical volume lv_corelog? [y/n]: y
Logical volume "lv_corelog" successfully removed
Volume group "vg_corelog_test" successfully removed
Now we'll add the old PVs back into the volume group, vg_lvm_test.
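The command for that isn't shown here; vgextend is the usual tool, assuming the device names from earlier:

```shell
# Put the two freed PVs back into the original volume group.
sudo vgextend vg_lvm_test /dev/vde /dev/vdf
```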
Let's say that we've got the sick feeling that /dev/vdb is about to die on
us. Let's move it over to the much newer /dev/vde.
$ sudo pvmove /dev/vdb /dev/vde
Skipping mirror LV lv_lvm_test
Skipping mirror log LV lv_lvm_test_mlog
Skipping mirror image LV lv_lvm_test_mimage_0
Skipping mirror image LV lv_lvm_test_mimage_1
All data on source PV skipped. It contains locked, hidden or non-top level LVs
No data to move for vg_lvm_test
D'oh! It's our mirror. We'll have to break it first.
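One way to do that (a sketch using the names from the output above; run with care, since the data is unmirrored while it moves) is to drop the mirror down to a plain linear LV, move the data, then rebuild the mirror leg:

```shell
# Convert the mirror back to a linear LV (drops the second leg and the log)...
sudo lvconvert -m 0 vg_lvm_test/lv_lvm_test
# ...move the data off the suspect disk...
sudo pvmove /dev/vdb /dev/vde
# ...then re-create the mirror leg on the remaining PVs.
sudo lvconvert -m 1 vg_lvm_test/lv_lvm_test
```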
Once you're done, you should be able to log in to your BBB server and test it.
Using with Drupal
For some reason, the BBB developers decided that Apache wasn't good enough
for them and went with nginx. Drupal will probably work with nginx, but
it's not trivial to set up. And that's what we want: trivial.
First off, grab the Launchpad OpenID extensions from here:
bzr branch lp:drupal-launchpad/6.x
mv 6.x openid-launchpad # this needs to come after "openid", due to
tar fcz openid-launchpad.tar.gz openid-launchpad
Then unpack that tarball on your Drupal server so that the module is in place
(for Drupal 6, sites/all/modules/ is the usual spot).
Go into Site Building -> Modules, and enable "Launchpad OpenID".
Now, go into Site Building -> Blocks, and remove the usual login screen, and
add the OpenID launchpad login button somewhere where the users can find it.
Set authorized users' permissions so that they can attend meetings, create
meetings, and so on.
If everything is working at this point, you might want to upgrade your BBB
instance to 0.8. There are some improvements to the meeting layout, which is
especially good when you have a lot of people's video windows to move around.
First off, why Xterm? It's lightweight, it's easy, and it doesn't suffer from
trackpad scroll-madness like GNOME Terminal. Whenever I'm using
tmux under GNOME Terminal on my laptop, inadvertent scroll-wheel
events often cause me to send the wrong commands or write nonsense into IRC.
Unfortunately, Compiz and Xterm do not play well together. Due to how
Compiz calculates its textures, Xterm displays tend to get garbled.
Having had my fill of gnome-terminal, I finally got around to figuring out
the workaround for this. Just put the following in .Xdefaults:
! Workaround for compiz issues
Then xrdb ~/.Xdefaults && xterm, and you're back in old-school business.
In the various flavors of UNIX, you usually have an init or init-like process which wrangles all the sundry system services. Things like your mail server, your ftp server, etc. all have to be started by something.
In Ubuntu, we use Upstart since Lucid. It's a modern, event-based init daemon. Red Hat Enterprise Linux 5 uses the fine old SysVInit system. In RHEL 6, you also have Upstart, but it's being run in a SysVInit-compatibility mode. RHEL's cousin, Fedora, uses systemd.
Under Mac OS X Tiger and later, there's launchd. This not only replaces init, but also cron and a couple of other traditional UNIX facilities.
The Ubuntu Linux distribution considered using launchd in 2006. However, launchd was rejected as an option because it was released under the Apple Public Source License – which at the time was described as an "inescapable licence problem".
In August 2006, Apple relicensed launchd under the Apache License, Version 2.0 in an effort to make adoption by other open source developers easier.
I'm not so convinced that licensing was the main reason that launchd was passed over. I wasn't around Ubuntu or Canonical at the time, and I've heard conflicting accounts from various Canonical employees on the matter.
With as much time and effort as we've put into Upstart in the meantime, it's turned out to be a pretty good solution.
People complain about the Upstart configuration file syntax. If you compare it against the property list approach, though, it's like writing hot buttered biscuits and bacon.
For comparison, here's a property list for postfix (stolen from Macworld):
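Something like the following gives the flavor of it (a reconstruction of a typical launchd job for Postfix, not the original Macworld listing; keys and paths follow the stock OS X job):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>org.postfix.master</string>
    <key>Program</key>
    <string>/usr/libexec/postfix/master</string>
    <key>ProgramArguments</key>
    <array>
        <string>master</string>
    </array>
    <key>QueueDirectories</key>
    <array>
        <string>/var/spool/postfix/maildrop</string>
    </array>
    <key>AbandonProcessGroup</key>
    <true/>
</dict>
</plist>
```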
Postfix isn't upstartified in Ubuntu (yet), but here's a comparable rsyslog upstart config:
# rsyslog - system logging daemon
# rsyslog is an enhanced multi-threaded replacement for the traditional
# syslog daemon, logging messages from applications
description "system logging daemon"
start on filesystem
stop on runlevel [06]
exec rsyslogd $RSYSLOGD_OPTIONS
Launchd might be stable, and it's what all the cool BSD kids are running these days, but it breaks one of the commandments: XML is not to be considered human-writable. It's a data format.
LXC is a container-type technology built into the Linux kernel, with
userspace tools available on Ubuntu, and probably most other modern distributions.
It is not virtualization as such, but it will let you set up development
environments very quickly.
It is good for setting up test web applications, running databases, checking
packaging issues, that sort of thing. You can also use it to place
a more secure, minimal environment around applications that communicate with
the outside world.
It is not good for deeper testing, or anything that requires virtualized
block or network devices, such as OpenVPN. If you've run OpenVZ or
Virtuozzo containers before, you'll be aware of these gotchas.
To test it out on Natty, just do the following:
apt-get install lxc
You'll need to set up a configuration file. The easiest way is to use
"macvlan" as your networking type.
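On Natty-era LXC, a minimal config and creation step might look like this (the file name, host interface, and container name are assumptions; the natty template ships with the lxc package):

```shell
# Minimal network config for the container: macvlan riding on the host's eth0.
cat > lxc-natty.conf <<EOF
lxc.network.type = macvlan
lxc.network.link = eth0
lxc.network.flags = up
EOF
# Bootstrap a Natty container from the template, using that config.
sudo lxc-create -n lxc-natty -t natty -f lxc-natty.conf
```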
This takes a bit of time, since it's downloading packages and setting them
up. The result is cached, however. The next time you set up a container using
that template, it goes lightning-fast, for values of lightning around 5 seconds.
Once this gets through bootstrapping the container, you'll be left with a
directory under /var/lib/lxc/lxc-natty (in this case). The actual
container's root filesystem is under the subdirectory rootfs/ . Use chroot
to prepare the environment a little before booting it up.
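That preparation might look like this (the user name matches the commands below, and is otherwise an arbitrary choice):

```shell
# Enter the container's root filesystem from the host...
sudo chroot /var/lib/lxc/lxc-natty/rootfs /bin/bash
# ...then create the account you'll log in with once the container boots.
adduser eric
```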
gpasswd -a eric sudo
apt-get install -y ubuntu-minimal
LXC uses cgroups to keep itself out of trouble. It will control
these using any mounted cgroup pseudo-filesystem it comes across. Just
mount one on /cgroups/ and it will find it with no issues:
mkdir -p /cgroups
mount -t cgroup cgroups /cgroups
mount | grep cgroup
At this point, you can boot your LXC instance up and have a look around.
Since LXC containers take over the current TTY, it's a good idea to create a
new screen session to house the different consoles. This lets you login
directly in case you break your network or SSH configuration.
(in a screen session)
lxc-start -n lxc-natty
Now you can log in to the user account you supplied with the adduser command
earlier. Do a quick ifconfig to find the container's address, and you'll be
able to log in to your instance with SSH.
Oneiric adds some features to lxc, and the syntax changes a bit for creating
LXC containers. To create a new container, do the following:
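The invocation itself isn't shown here; with Oneiric's new ubuntu template it is roughly the following (the container name is an assumption):

```shell
# Oneiric's "ubuntu" template replaces the per-release templates.
sudo lxc-create -t ubuntu -n lxc-oneiric
```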
So, you've got yourself a fancy new Thinkpad x220 or T520, and you're trying
to get Ubuntu running on it.
Intel's Sandy Bridge platform isn't widely supported in the Linux world yet. If
you're running Ubuntu's current release, Natty, it will work without much
fuss. You might just need to update xorg's Intel graphics driver and libdrm
from a suitable PPA. To get the full 3D acceleration, you'll
also want to get familiar with the X220 Mesa PPA.
This will give you a fully-working desktop with accelerated 3D graphics and
the swooshy Desktop Effects.
Mind you, you'll be running a configuration that isn't supported by
Canonical, and isn't an official distribution any more. The best way to
stay supported is to buy supported hardware, like a non-Sandy Bridge
laptop or one of the many that are based on AMD CPUs.
When you enable the Expose-like feature under Unity
(Default Super+W), you'll see all of your windows tiled about the screen,
allowing you to pick them easily.
The more windows you have open, however, the smaller the preview windows
become. The fact that almost all applications have a fairly consistent look
and feel might be good for the general desktop experience, but it's not very
good for picking out which windows belong to which application.
One way to get around this is to overlay the application icon on the window
itself. Unfortunately, Unity itself doesn't have a configuration UI for
this, so you'll need to do it at the Compiz level. To do this, install the
CompizConfig Settings Manager:
apt-get install compizconfig-settings-manager
Now run it, either using the launcher or by typing "ccsm" into a
terminal. Go to the control panel called "Scale" under "Window Management", and
set the "Overlay Icon" option. I personally prefer "Emblem", as it makes it
easier to pick out the icon.