Friday, June 28, 2013

Spanish or Bust: Attaining fluency in 90 days or else ...

"How do you have an adventure?"

"You take a stupid idea, and follow through ..."

Perhaps I've truly lost my mind for once, but I've decided to embark on a personal project: achieving some fluency in Spanish in just 90 days, and posting my progress to the internet to keep me honest and on track. I will post future updates as I go, and would welcome help from anyone interested!

The cost of failure: being forced to praise Windows 8 in a future video, and to run it for one week on my laptop.



(I apologize for the quality; I finally found the nerve to record myself doing this, and uploaded it before I lost that nerve again.)


Friday, June 22, 2012

Announcement of Calxeda Highbank Images for Quantal

Hello all,

As many of you are aware, Canonical, in coordination with Calxeda and others, has been working to bring Ubuntu to this new class of high-performance, low-power-consumption cluster computers built around ARM processors. Many of you who were in attendance at UDS in Oakland may remember seeing Calxeda's talks and demonstration live, and the exciting news that this represents. The full presentation is available here.

In line with this work, I am extremely pleased to announce that the initial images for the Calxeda Highbank platform are now available for download, with installation instructions available here. Please remember that Quantal is still in alpha development, and is not currently recommended for use in a production environment. As development of 12.10 continues, we will continue to refine these images and our tools to fully embrace MAAS on ARM, and to make 12.10 our best release yet.


As an additional note, Highbank support for Ubuntu 12.04 LTS will be released as part of the 12.04.1 update in mid-August and will join our support for ArmadaXP from Marvell, which was released as part of 12.04.

---
Michael Casadevall
ARM Server Tech Lead
Professional Engineering and Services, Canonical
michael.casadevall@canonical.com

Sunday, November 20, 2011

Possible GLX Bug in Ubuntu; feedback needed (affects Intel video cards 2D/3D acceleration)

So I've recently been experimenting with VirtualBox on a personal project, and I ran into an issue where I could not enable 2D/3D acceleration on Ubuntu 11.10. After quite a bit of debugging and forum searching, I found that the problem was that the NVIDIA GLX driver was being loaded instead of the standard Mesa one, preventing any video acceleration from properly working on my Intel-based video card.

I just recently reinstalled Kubuntu on this laptop, and since it's a fairly stock install at the moment, I suspect that this is a general (K)ubuntu bug, and not something related to my tinkering with the system. In addition, since I switched to using Kubuntu full time, this is the first time I've seen transparency and other desktop effects, and system performance has improved dramatically. While I can't say for certain, I suspect that my system was also affected on its previous install. Part of the issue may be related to which packages are seeded per flavour, so this bug may only affect those who installed Kubuntu over, say, Xubuntu or Ubuntu; without more information, it's impossible to say.

This is where you can help; if you are running any flavor of Ubuntu with an Intel-based video card, you might be affected by this too.

Here's how to check; open a terminal, and type:

mcasadevall@daybreak:/var/log$ cat /var/log/Xorg.0.log

then find the section where the glx module is loaded. It looks something like this:

[236901.570] (II) LoadModule: "glx"
[236901.571] (II) Loading /usr/lib/xorg/modules/extensions/libglx.so
[236901.578] (II) Module glx: vendor="X.Org Foundation"
[236901.578] compiled for 1.10.4, module version = 1.0.0
[236901.578] ABI class: X.Org Server Extension, version 5.0

(this is on a machine where the Intel acceleration is properly working).
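If you'd rather not page through the whole log, a quick grep will pull out just the vendor line. This is a convenience helper of my own, not something shipped anywhere; the default log path is the standard Xorg location, so adjust it if your system logs elsewhere:

```shell
# Print the vendor of the glx module recorded in an Xorg log.
# Defaults to the standard Xorg log location; pass another path
# to check a saved copy of a log instead.
glx_vendor() {
    grep 'Module glx: vendor' "${1:-/var/log/Xorg.0.log}"
}

# Demonstration against a saved copy of the snippet above:
printf '(II) Module glx: vendor="X.Org Foundation"\n' > /tmp/glx-sample.log
glx_vendor /tmp/glx-sample.log   # prints the vendor line
```

On a healthy Intel setup the vendor reads "X.Org Foundation"; on an affected machine you'll see the NVIDIA or ATI vendor string instead.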

If it says 'ATI' or 'NVIDIA', you've run into the same issue I have. So, dear readers, if you've had any issue with graphics performance, gaming, or simple UI lag and have an Intel video card, please post a comment with your video card, which flavor of Ubuntu you have installed, and the glx section of Xorg.0.log. If I get a few reports that confirm this, I'll file a proper bug in Launchpad and then work to get this fixed.

Wednesday, November 16, 2011

Touch-friendly apps in Ubuntu/Debian?

I'm working on a personal pet project and wanted to get some feedback on the best apps to use in a touch-only environment. I know that there are a few people who use Debian or Ubuntu on a tablet, and I was hoping to get suggestions on the best desktop environment and apps available. Please leave some comments with suggestions, and with a little luck, I'll have something to demo on this blog in a few weeks.

Tuesday, November 15, 2011

Secure Boot - it's here, and has been for quite a while ...

There's been a lot of noise about Microsoft requiring Secure Boot from Windows 8 OEMs. For those of you unfamiliar with it, Secure Boot requires that the boot chain be signed, and this 'feature' must be enabled by default. Although I have been unable to find specific details, it appears that the chain of trust needs to extend from the BIOS/UEFI all the way down to the kernel. Obviously, requiring a signed boot chain makes using FOSS platforms like Ubuntu or Debian an impossibility short of having the UEFI Platform Key and re-signing the entire chain.

Steven Sinofsky's MSDN blog has a fairly good overview of how it works. Canonical and Red Hat also have a good white paper on why secure boot is a serious problem for Linux distributions. Even if secure boot itself can be disabled, it *greatly* raises the bar for general end-users to successfully install Ubuntu on their machines. In addition, it is the responsibility of OEM and BIOS manufacturers to provide the option to disable it.

There is already a long history of OEMs removing BIOS options or introducing DRM: locking out VT-x on Sony laptops, for instance, or restricting laptops to only accept 'branded' wifi and 3G cards. Given this track record, can OEMs realistically be trusted to keep this option available?

What most people don't realize is that secure boot itself is not a new concept; it's simply part of the Trusted Computing initiative, and has been implemented on embedded platforms for many years. If you own an iPhone, or one of the vast majority of Android devices, you are using a device that either has the secure boot feature or something very close to it. This is especially painful on Android, as Google's security system restricts users to a *very* limited shell and subset of utilities on non-rooted devices. WebOS, Maemo, and, to my knowledge, MeeGo give the end-user full unrestricted access to the boot chain; you can swap out the kernel, or even the entire OS, if you are so motivated.

Although it is still an ongoing problem, several vendors, such as HTC, Samsung, Motorola (kinda), and even Sony, now offer unlockable bootloaders. In the Android community, unlockable bootloaders have been a welcome middle ground in the traditionally restrictive and locked-down world of cellular devices.

While some may argue that such locks are necessary to protect consumers, it is perfectly feasible to create devices with unlocked bootloaders that are still secure. The Nook Color is an Android-powered eReader. Its stock firmware doesn't allow sideloaded applications, or even access to a user shell via adb, but the BootROM on the device attempts to boot from the microSD card before eMMC, making it possible for enterprising users to easily modify the underlying OS (as well as making it physically impossible to brick the device with a bad flash). Barnes & Noble even sells a book on rooting the Nook Color; it was right next to the devices on display at the time. In addition, they've continued the tradition of easily modifiable devices with both the Nook Simple Touch and the Nook Tablet.

One of the most impressive benefits of such open devices is the possibility of running Ubuntu on them. The absolute poster child for this is the Toshiba AC100. For those of you unfamiliar with it, it's an ultralight netbook that shipped with Android 2.1/2.2, with easy access to the built-in flash via the mini-USB port on the side of the device. Due to the valiant efforts of the AC100 community, Ubuntu was ported to this device and became a supported platform, with images available on cdimages.ubuntu.com. If you were an attendee at UDS, you likely saw several AC100s, all running Ubuntu.

This brings me to the point that motivated me to write this post in the first place. One of the most impressive tablets I've seen to date is the ASUS eeePad Transformer, an Android tablet with a fully dockable keyboard. I have one of these devices, and it's one of the most impressive and usable Android tablets I own. Sadly, such a powerful device was hobbled from its true potential by ASUS's decision to ship it with a locked and encrypted bootloader. Surprisingly, the Secure Boot Key (SBK) was acquired and released into the wild, making it possible to reflash the device. Unfortunately, even with the SBK, the device's bootloader is still extremely hobbled compared to the AC100's, making flashing a slow and difficult process.

In response, ASUS refreshed the eeePad's hardware as the new B70 SKU, which has a new Secure Boot Key. Despite this, a root exploit was recently found that allows people to circumvent these restrictions and install custom ROMs. It is, however, only a matter of time before ASUS responds and releases an update that fixes this bug.

Steven Barker (lilstevie) on xda-developers has successfully created a port of Ubuntu to the Transformer. Currently, installing Ubuntu on the Transformer requires nvflash access, so it's not possible to use his image on the newly liberated B70 devices. I am certain that a new method of installing via an update.zip will be developed for those of us with hobbled devices.

It is a showcase of what is possible when you have open hardware, and it also proves one indisputable point: any 'trusted boot' or DRM scheme can and will be defeated; at best you piss off your userbase, and at worst you force users to exploit bugs to gain control of their devices. As it is impossible to reflash these devices from the bootloader, a failed kernel flash WILL brick them, increasing warranty and support costs as users try to return their now-broken devices.

In closing, while there have been some victories in the ongoing war of open hardware vs. trusted computing, the road ahead still remains very murky. Victories in the mobile market have shown that there is a market for open devices. Google's own Nexus One was sold as a developer phone and as a way to encourage manufacturers to raise the bar; it sold well enough to recoup its development costs. While there are no official statements, the Nokia N900 is suspected to have broken all sales expectations, backed up by the fact that Angry Birds sold extremely well on the Ovi Store for the N900.

From the article in question:
What reaction have you had in terms of sales and customer feedback?

Angry Birds had already been launched on App Store before it came out on Ovi Store, and it had a great review average from iPhone reviewers and users alike, so we expected a good reception from N900 users as well.

Even so, we were quite surprised by just how the N900 community immediately took the game to heart. The game obviously made many people very happy, and that is really the greatest achievement that anyone who creates entertainment for a living can hope for. Well, maybe the greatest achievement is huge bundles of cash, but making people happy comes a close second.

In the first week that Angry Birds has been on the Ovi Store, it has been downloaded almost as many times as the iPhone version in six weeks. Given that most N900 users have not even used Ovi Store yet, we are confident that there will be many more downloads in the months to come, and are sure that the N900 version will be very profitable.

That being said, with Microsoft pushing secure boot and trusted computing down everyone's throats with Windows 8, it is hard to say what the future might hold for those of us who want to own our devices.

Friday, July 1, 2011

Pandaboard Netboot Images Now Available

As I mentioned in my previous blog post, OMAP4 netboot images were available, but non-functional. I'm pleased to announce that these bugs have now been resolved, and it is possible to have a functional install on OMAP4. This also has the added advantage of allowing special partitioning layouts such as RAID or LVM, or simply having a non-SD-based root device. The images are available here: http://ports.ubuntu.com/ubuntu-ports/dists/oneiric/main/installer-armel/current/images/omap4/netboot/

To use them, simply dd boot.img-serial or boot.img-fb to an SD card, pop it in the board, and power on; the installer will come up.
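For reference, the dd step looks roughly like this. The device name /dev/sdX is a placeholder, not a real path; double-check your SD card's device node with lsblk first, since dd will silently overwrite whatever you point it at:

```shell
# Write an installer image onto a target device (or a file, for testing).
# /dev/sdX in the examples below is a placeholder -- verify your SD card's
# actual device node with lsblk before running anything like this for real.
flash_image() {
    dd if="$1" of="$2" bs=4M conv=fsync
}

# On real hardware, for example:
#   flash_image boot.img-serial /dev/sdX   # serial-console installer
#   flash_image boot.img-fb /dev/sdX       # framebuffer installer
```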

There is still a known bug where partman will not properly create the necessary boot partition. During the partitioning step, you must select manual partitioning, then create a 72 MiB FAT32 partition with no mount point and the Bootable flag set to 'on'. This partition must be the first partition on the device. flash-kernel-installer will be able to find it on its own.

Thursday, June 30, 2011

On porting the installer (Part 1)...

So as Alpha 2 approaches, I find myself porting the alternate installer/d-i to the Pandaboard to support the netboot installer. There's not a lot of documentation describing the internals of d-i, nor which bits are platform specific.

This is especially true when creating a new subarchitecture, since lots of little places have to be touched, kernels usually have to be tweaked, and there are all other sorts of odds and ends. This post isn't a comprehensive guide to what's necessary, just some tidbits from what I did.

The first step of any enablement is having something you can run and boot. The netboot images, as well as the alternate kernel and ramdisk, are built out of the debian-installer package, where several config files for driving the process are located under build/config/$arch/$subarch. For omap4, we have the following files:

boot/arm/generate-partitioned-filesystem
build/config/armel.cfg
build/config/armel/omap4.cfg
build/config/armel/omap4/cdrom.cfg
build/config/armel/omap4/netboot.cfg

boot/arm/generate-partitioned-filesystem is a shell script that takes a VFAT blob and spits out an image with a proper MBR and partition table.

build/config/armel.cfg is simply a list of subarchitectures to build, plus some sane-ish kernel defaults for armel.

build/config/armel/omap4.cfg is also a simple config file, which specifies the types of images we're building and the kernel to use in d-i. This file looks like this:

MEDIUM_SUPPORTED = netboot cdrom

# The version of the kernel to use.
KERNELVERSION := 2.6.38-1309-omap4
# we use non-versioned filenames in the omap kernel udeb
KERNELNAME = vmlinuz
VERSIONED_SYSTEM_MAP =

As a point of clarification, 'cdrom' is a bit of a misnomer; it refers to the alternate-installer kernel and ramdisk used by alternate images, not the type of media. Other types of images exist, such as 'floppy' and 'hd-install', but these are specialized images and out of scope for this blog post.

Each file in build/config/armel/omap4/* is a makefile that's called in turn for each image that is created. The most interesting of these is netboot.cfg:

MEDIA_TYPE = netboot image
SUBARCH = omap4
TARGET = $(TEMP_INITRD) $(TEMP_KERNEL) omap4
EXTRANAME = $(MEDIUM)
INITRD_FS = initramfs

MANIFEST-INITRD = "netboot initrd"
MANIFEST-KERNEL = "kernel image to netboot"
INSTALL_PATH = $(SOME_DEST)/$(EXTRANAME)

omap4:
 # Make sure our build environment is clean
 rm -rf $(INSTALL_PATH)
 mkdir -p $(INSTALL_PATH)

 # Generate uImage/uInitrd
 mkimage -A arm -O linux -T kernel -C none -a 0x80008000 -e 0x80008000 -n "Ubuntu kernel" -d $(TEMP_KERNEL) $(INSTALL_PATH)/uImage
 mkimage -A arm -O linux -T ramdisk -C none -a 0x0 -e 0x0 -n "debian-installer ramdisk" -d $(TEMP_INITRD) $(INSTALL_PATH)/uInitrd

 # Generate boot.scrs
 mkimage -A arm -T script -C none -n "Ubuntu boot script (serial)" -d boot/arm/boot.script-omap4-serial $(INSTALL_PATH)/boot.scr-serial
 mkimage -A arm -T script -C none -n "Ubuntu boot script (framebuffer)" -d boot/arm/boot.script-omap4-fb $(INSTALL_PATH)/boot.scr-fb

 # Create DD'able filesystems
 mkdosfs -C $(INSTALL_PATH)/boot.img-fat-serial 10240
 mcopy -i $(INSTALL_PATH)/boot.img-fat-serial $(INSTALL_PATH)/uImage ::uImage
 mcopy -i $(INSTALL_PATH)/boot.img-fat-serial $(INSTALL_PATH)/uInitrd ::uInitrd
 mcopy -i $(INSTALL_PATH)/boot.img-fat-serial /usr/lib/x-loader/omap4430panda/MLO ::MLO
 mcopy -i $(INSTALL_PATH)/boot.img-fat-serial /usr/lib/u-boot/omap4_panda/u-boot.bin ::u-boot.bin
 cp $(INSTALL_PATH)/boot.img-fat-serial $(INSTALL_PATH)/boot.img-fat-fb
 mcopy -i $(INSTALL_PATH)/boot.img-fat-serial $(INSTALL_PATH)/boot.scr-serial ::boot.scr
 mcopy -i $(INSTALL_PATH)/boot.img-fat-fb $(INSTALL_PATH)/boot.scr-fb ::boot.scr
 boot/arm/generate-partitioned-filesystem $(INSTALL_PATH)/boot.img-fat-fb $(INSTALL_PATH)/boot.img-fb
 boot/arm/generate-partitioned-filesystem $(INSTALL_PATH)/boot.img-fat-serial $(INSTALL_PATH)/boot.img-serial

 # Generate manifests
 update-manifest $(INSTALL_PATH)/uImage "Linux kernel for OMAP Boards"
 update-manifest $(INSTALL_PATH)/uInitrd "initrd for OMAP Boards"
 update-manifest $(INSTALL_PATH)/boot.scr-fb "Boot script for booting OMAP netinstall initrd and kernel from SD card. Uses framebuffer display"
 update-manifest $(INSTALL_PATH)/boot.scr-serial "Boot script for booting OMAP netinstall initrd and kernel from SD card. Uses serial output"
 update-manifest $(INSTALL_PATH)/boot.img-serial "Boot image for booting OMAP netinstall. Uses serial output"
 update-manifest $(INSTALL_PATH)/boot.img-fb "Boot image for booting OMAP netinstall. Uses framebuffer output"

The vast majority of this is fairly straightforward. TARGET lists the targets called by make; the stock targets for creating the vmlinuz and initrd must be included. The omap4 target then handles the specialized processing for the omap4/netboot image.

omap4 requires a VFAT boot partition on the SD card with a proper filesystem and MBR. The contents of the filesystem are straightforward:

MLO - also known as x-loader, the first-stage bootloader
u-boot.bin - the u-boot binary, a second-stage bootloader used to boot the kernel
uImage - Linux kernel with a special u-boot header (created with mkimage)
uInitrd - d-i ramdisk with a special u-boot header
boot.scr - boot script containing the u-boot commands to execute at startup

MLO and u-boot.bin are copied in from x-loader-omap4-panda and u-boot-linaro-omap4-panda, which are listed as build-deps in the control file for d-i. boot.scr is generated from a plain-text file:

fatload mmc 0:1 0x80000000 uImage
fatload mmc 0:1 0x81600000 uInitrd
setenv bootargs vram=32M mem=456M@0x80000000 mem=512M@0xA0000000 fixrtc quiet debian-installer/framebuffer=false console=ttyO2,115200n8
bootm 0x80000000 0x81600000

These are u-boot commands that simply load the uImage/uInitrd into RAM, set the command line, and then boot into it.

When porting the installer, it is mostly a task of putting your subarchitecture name in the right places, then adding the necessary logic to spit out an image that boots. This provides a sane base for porting the other bits of the installer. When d-i is uploaded to Launchpad, these files end up in http://ports.ubuntu.com/ubuntu-ports/dists/oneiric/main/installer-armel/current/images/

My next blog post will go a bit into udebs, explain how d-i does architecture detection, and introduce flash-kernel.