Disable User Switching on Ubuntu 14.04

Recently, one of our home computers running Ubuntu 14.04 has been experiencing what seems to be a bug triggered when multiple users log in concurrently using the “fast user switching” feature. The symptom is a black screen after switching back to an existing user session. Searching the Web suggests this is a common problem, but it is not clear to me in which component the bug lies. This Ubuntu bug seems to match the problem, although it has been attributed to a bug in the Xorg Intel graphics driver.

In any case, my goal with this post is not to look into the bug itself but into a simple way to try to prevent it. Because the bug is apparently triggered by switching users, I decided to look into what it takes to disable the fast user switching feature.

In modern Ubuntu/GNOME (GNOME >= 3.0), fast user switching is a system-wide option controlled by a pair of GSettings keys (these used to live in GConf, before the GNOME project deprecated GConf in favor of GSettings):

shell$ gsettings list-recursively | grep user-switch
org.gnome.desktop.lockdown disable-user-switching false
org.gnome.desktop.screensaver user-switch-enabled true

These options are part of XML schema files stored under the /usr/share/glib-2.0/schemas/ directory:

shell$ grep user-switch /usr/share/glib-2.0/schemas/*xml
/usr/share/glib-2.0/schemas/org.gnome.desktop.lockdown.gschema.xml:    <key type="b" name="disable-user-switching">
/usr/share/glib-2.0/schemas/org.gnome.desktop.screensaver.gschema.xml:    <key type="b" name="user-switch-enabled">

To change the default (user switching enabled), the correct way to override GSettings defaults is to create a .gschema.override file and re-compile the schemas.
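
As an aside, the same two keys can also be flipped per user with gsettings set, which writes to that user's dconf database instead of changing the system-wide default. Since I wanted the change to apply to every account on the machine, I used the override-file approach, but for reference the per-user commands would be:

shell$ gsettings set org.gnome.desktop.lockdown disable-user-switching true
shell$ gsettings set org.gnome.desktop.screensaver user-switch-enabled false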

To override the above two GSettings options I created /usr/share/glib-2.0/schemas/org.gnome.desktop.lockdown.gschema.override and /usr/share/glib-2.0/schemas/org.gnome.desktop.screensaver.gschema.override with the following contents:

shell$ cat /usr/share/glib-2.0/schemas/org.gnome.desktop.lockdown.gschema.override
[org.gnome.desktop.lockdown]
disable-user-switching=true

and

shell$ cat /usr/share/glib-2.0/schemas/org.gnome.desktop.screensaver.gschema.override
[org.gnome.desktop.screensaver]
user-switch-enabled=false

Then the schemas need to be re-compiled:

shell$ sudo glib-compile-schemas /usr/share/glib-2.0/schemas/

This refreshes the file /usr/share/glib-2.0/schemas/gschemas.compiled so that the overrides we configured become the new defaults. Logging out and restarting the lightdm service (sudo service lightdm restart) is required for the new configuration to take effect.
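
As an optional sanity check, the effective values can be queried with gsettings from a new session; if the overrides took effect, the two keys should now read as follows:

shell$ gsettings get org.gnome.desktop.lockdown disable-user-switching
true
shell$ gsettings get org.gnome.desktop.screensaver user-switch-enabled
false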

There is a wrinkle in all this, however: the session indicator (provided by the package indicator-session) in Ubuntu 14.04 (and probably some earlier releases) does not take the above GSettings options into consideration, so user switching remains available regardless of configuration. This is tracked in Ubuntu bug #1325353, which is fixed in Ubuntu 14.10. I did not want to upgrade my machine from 14.04 to 14.10, so I just downloaded the latest version of the indicator-session package (from what will become Ubuntu 15.04) and installed it (there were no package dependency problems).
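
For reference, installing the manually downloaded package is just a matter of dpkg -i; the filename below is a placeholder for whatever version you actually fetch:

shell$ sudo dpkg -i indicator-session_VERSION_amd64.deb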

I thought I would write about my experience disabling user switching because it requires overriding GSettings preferences via .gschema.override files and gschema recompiles, which is something I keep forgetting how to do.

As to whether disabling user switching fixes the black screen problem… only time will tell. The trade-off is that users can no longer use the computer while someone else is logged in: the logged-in user must log out first, and then the new user logs in via the LightDM greeter.

Note: The following askubuntu.com question was very useful for figuring out how to change GSettings defaults:

http://askubuntu.com/questions/65900/how-can-i-change-default-settings-for-new-users

Add More Disk Space To Existing Ubuntu 12.04 Server Installation

Recently we found ourselves regretting a decision we made when we installed Ubuntu Server 12.04 LTS on a VMware ESXi virtual machine (VM): when we created the VM, we allocated a very small hard disk (8 GBytes), thinking that for the specific purpose of that machine we would not need anything bigger. It turned out that we were very wrong, which became apparent when we tried to download a 4+ GByte file and the download failed for lack of free space.

Because we did not want to create a new VM and copy the existing installation over, we decided to just increase the size of the existing disk and of the partitions, physical volumes, and logical volumes inside it. It turns out this was not as easy as we thought, so we decided to document things here. We basically followed the instructions in this blog post (credit where credit is due):

http://www.joomlaworks.net/blog/item/168-resizing-the-disk-space-on-ubuntu-server-vms

Increasing the size of the actual hard disk is very easy because we are working with a virtual machine: all it takes is to edit the virtual machine settings, find the virtual disk, and increase its size (we have done this for both VMware and VirtualBox virtual machines; the process is similar). The difficult part comes after increasing the size of the virtual disk: the existing partitions and volumes inside the disk need to be resized as well.
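
As a side note, for VirtualBox the resize can also be done from the host's command line instead of the GUI; a rough sketch (the disk path and new size in MB are placeholders, and this applies to VDI/VHD images):

shell$ VBoxManage modifyhd /path/to/disk.vdi --resize 256000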

Several links we found online talked about booting the machine with an Ubuntu LiveCD and then using GParted (a GUI-based partition utility) to resize the existing partition to make use of the new, extra unallocated space. We did not have any luck with that.

Instead, what worked for us was to simply use the Linux Logical Volume Manager (LVM), which the Ubuntu installer set up when we first installed Ubuntu on this machine. Basically, the process looks like this:

  1. Create a new logical partition using the new unallocated space.
  2. Create a new physical volume that uses the new partition.
  3. Extend the existing Volume Group that was created during Ubuntu installation to make use of the new physical volume.
  4. Extend the Logical Volume to make use of the new available space.
  5. Resize the filesystem to make use of the new available space.

We will go into detail on each of these steps.
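
For reference, here is the whole sequence condensed into the commands we ended up running (the partition, volume group, and logical volume names match our particular setup and will differ on other machines):

root@holyland:~# cfdisk /dev/sda                          # step 1 (interactive): new logical partition, type 8e, then reboot
root@holyland:~# pvcreate /dev/sda6                       # step 2: new physical volume
root@holyland:~# vgextend holyland /dev/sda6              # step 3: extend the volume group
root@holyland:~# lvextend -l 100%FREE /dev/holyland/root  # step 4: extend the logical volume
root@holyland:~# resize2fs /dev/holyland/root             # step 5: grow the filesystem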

Create New Logical Partition

We used cfdisk. Basically, select the unallocated (free) space, then select the “New” option, then “Logical”, and allocate all the available space (shown by default). Select “Type” and enter “8e” (for Linux LVM). Finally select “Write”, confirm by typing “yes” followed by <Enter>, then “Quit”, and reboot the machine.

This is how the partition table looked before using cfdisk:

root@holyland:~# fdisk -l /dev/sda

Disk /dev/sda: 268.4 GB, 268435456000 bytes
255 heads, 63 sectors/track, 32635 cylinders, total 524288000 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0009f8ab

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048      499711      248832   83  Linux
/dev/sda2          501758    16775167     8136705    5  Extended
/dev/sda5          501760    16775167     8136704   8e  Linux LVM

And this is how the partition table looked after using cfdisk to create the new partition in the free space and rebooting the machine. Note the new partition /dev/sda6:

root@holyland:~# fdisk -l /dev/sda

Disk /dev/sda: 268.4 GB, 268435456000 bytes
255 heads, 63 sectors/track, 32635 cylinders, total 524288000 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0009f8ab

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048      499711      248832   83  Linux
/dev/sda2          501758   524287999   261893121    5  Extended
/dev/sda5          501760    16775167     8136704   8e  Linux LVM
/dev/sda6        16775231   524287999   253756384+  8e  Linux LVM

Create New Physical Volume

To create a new physical volume in the new partition we used pvcreate:

root@holyland:~# pvcreate /dev/sda6
  Physical volume "/dev/sda6" successfully created
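
At this point pvs (or pvdisplay) can optionally be run to confirm that LVM sees the new physical volume; /dev/sda6 should be listed, not yet assigned to any volume group:

root@holyland:~# pvs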

Extend Existing Volume Group

First, we need the name of the volume group. This can be obtained by running the vgdisplay command:

root@holyland:~# vgdisplay
  --- Volume group ---
  VG Name               holyland
  System ID            
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  3
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               7.76 GiB
  PE Size               4.00 MiB
  Total PE              1986
  Alloc PE / Size       1978 / 7.73 GiB
  Free  PE / Size       8 / 32.00 MiB
  VG UUID               H5FfHu-lsxo-G9Od-k8S5-lnCQ-PuCq-059Xco

The output above (under “VG Name”) shows that the name of the volume group is “holyland” (the name of our server).

Next, we use the vgextend command to add the new physical volume to the existing volume group:

root@holyland:~# vgextend holyland /dev/sda6
  Volume group "holyland" successfully extended

Extend The Logical Volume

First, we need to obtain the name of the logical volume that we want to extend (a volume group can contain several logical volumes). To obtain the name of the logical volume we use the lvdisplay command:

root@holyland:~# lvdisplay
  --- Logical volume ---
  LV Name                /dev/holyland/root
  VG Name                holyland
  LV UUID                vNPQV1-s5OU-GYVP-FCMp-p3UY-7fhe-FXNsK8
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                7.23 GiB
  Current LE             1851
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:0

  --- Logical volume ---
  LV Name                /dev/holyland/swap_1
  VG Name                holyland
  LV UUID                YxBsBD-we0S-ELCV-yCsz-RAdL-Qzcj-y4cpH9
  LV Write Access        read/write
  LV Status              available
  # open                 2
  LV Size                508.00 MiB
  Current LE             127
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:1

We are interested in extending the logical volume used for the root filesystem, which corresponds to the logical volume called “/dev/holyland/root” in the output above (under “LV Name”).

To extend the logical volume we are interested in, we use the lvextend command as follows:

root@holyland:~# lvextend -l 100%FREE /dev/holyland/root
  Extending logical volume root to 242.03 GiB
  Logical volume root successfully resized
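
Before touching the filesystem, lvdisplay can be used to double-check the new size; the “LV Size” of /dev/holyland/root should now read 242.03 GiB instead of the original 7.23 GiB:

root@holyland:~# lvdisplay /dev/holyland/root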

Note the use of the “-l 100%FREE” argument, which resizes the logical volume to 100% of the free space in the volume group. (The variant “-l +100%FREE” would instead add all of the remaining free space on top of the current size; without the plus sign a small amount of space, roughly the logical volume's previous size, stays unallocated in the volume group, which was fine for our purposes.)

Extend Filesystem

Finally, to extend the filesystem we use the resize2fs command:

root@holyland:~# resize2fs /dev/holyland/root
resize2fs 1.42 (29-Nov-2011)
Filesystem at /dev/holyland/root is mounted on /; on-line resizing required
old_desc_blocks = 1, new_desc_blocks = 16
The filesystem on /dev/holyland/root is now 63447040 blocks long.

To verify, we use the df command:

root@holyland:~# df -hT
Filesystem                Type      Size  Used Avail Use% Mounted on
/dev/mapper/holyland-root ext4      239G  3.8G  225G   2% /
udev                      devtmpfs  237M  4.0K  237M   1% /dev
tmpfs                     tmpfs      99M  260K   98M   1% /run
none                      tmpfs     5.0M     0  5.0M   0% /run/lock
none                      tmpfs     246M     0  246M   0% /run/shm
/dev/sda1                 ext2      228M   75M  141M  35% /boot

Now the root filesystem has a healthy size!