Check out the Sumo branch for documentation.
Except for your own stuff, you should always be able to recover to the factory image or a self-built factory image. The exception is the contents of the factory partition, which gets mounted on /factory.
The factory partition /dev/disk/by-partlabel/factory is created during manufacturing and contains a unique bluetooth_address and the Intel Edison serial_number. The partition is never removed during flashing of images, not even when using flashall --recovery.
In case the partition is lost for some reason, post-install.sh will create dummy files for both, but the bluetooth address will likely not work and the serial number should ideally be equal to the serial number on the Intel Edison label. The serial_number can be reclaimed from the label on the Intel Edison, but the bluetooth_address needs to be recovered from a backup. You might want to go to /factory and make a copy of the files there before you proceed.
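A minimal sketch of such a backup, assuming the Edison is reachable over ssh as root at the (hypothetical) address 192.168.2.15; substitute your own address or hostname:
mkdir -p ~/edison-factory-backup
scp root@192.168.2.15:/factory/* ~/edison-factory-backup/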
Yocto Kirkstone will build on Ubuntu Kinetic (22.10).
Install the required build environment:
sudo apt-get install gawk wget git-core diffstat unzip texinfo gcc-multilib build-essential chrpath socat cpio python python3 libsdl1.2-dev xterm
sudo apt-get install python-is-python2 p7zip-full btrfs-progs lz4
These packages, together with ssh access, are what you will likely need to build Yocto. Yocto builds almost everything it needs itself, but not everything. So if you upgrade your distribution to a newer version you may find that things are temporarily broken. If you want to prevent that, you might want to build Yocto from a container running an LTS (Long Term Support) release. The nice thing about a container is that it's a lot smaller than a virtual machine and will build stuff a lot faster as it doesn't virtualize the kernel or the file system.
First install LXD:
sudo apt install lxd lxd-client
Init the LXD system using the defaults:
ferry@kalamata:~$ sudo lxd init
Would you like to use LXD clustering? (yes/no) [default=no]:
Do you want to configure a new storage pool? (yes/no) [default=yes]:
Name of the new storage pool [default=default]:
Name of the storage backend to use (btrfs, ceph, dir, lvm, zfs) [default=btrfs]:
Would you like to create a new btrfs subvolume under /var/snap/lxd/common/lxd? (yes/no) [default=yes]:
Would you like to connect to a MAAS server? (yes/no) [default=no]:
Would you like to create a new local network bridge? (yes/no) [default=yes]:
What should the new bridge be called? [default=lxdbr0]:
What IPv4 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
What IPv6 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]:
Would you like LXD to be available over the network? (yes/no) [default=no]:
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]:
Get and launch your first container (you might want to select a more appropriate name instead of first):
lxc launch ubuntu:22.04 first
ferry@kalamata:~$ lxc list
+-------+---------+---------------------+-----------------------+------------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+-------+---------+---------------------+-----------------------+------------+-----------+
| first | RUNNING | 10.51.52.227 (eth0) | fd42:....:eed2 (eth0) | PERSISTENT | 0 |
+-------+---------+---------------------+-----------------------+------------+-----------+
Get a shell on the container, create a user with sudo rights and push your ssh public keys:
ferry@kalamata:~$ lxc exec first -- /bin/bash
root@first:~# adduser ferry
root@first:~# adduser ferry sudo
root@first:~# login ferry
Password:
ferry@first:~$ mkdir .ssh
ferry@first:~$ logout
root@first:~# exit
ferry@kalamata:~$ lxc file push .ssh/id_rsa.pub first/home/ferry/.ssh/id_rsa.pub
ferry@kalamata:~$ lxc file push .ssh/authorized_keys first/home/ferry/.ssh/authorized_keys
Now you can ssh into the container and install the required packages to build Yocto and clone meta-intel-edison:
ferry@kalamata:~$ ssh 10.51.52.227
ferry@first:~$ sudo apt-get update
ferry@first:~$ sudo apt-get install gawk wget git-core diffstat unzip texinfo gcc-multilib build-essential chrpath socat cpio python python3 python3-pip python3-pexpect xz-utils debianutils iputils-ping libsdl1.2-dev xterm
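For the clone itself, a minimal sketch, assuming the layer still lives in the edison-fw organization on GitHub and has a branch matching your Yocto release (check the project page for the current URL and branch name):
ferry@first:~$ git clone https://github.com/edison-fw/meta-intel-edison.git
ferry@first:~$ cd meta-intel-edison
ferry@first:~/meta-intel-edison$ git checkout kirkstone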
To get access to the files in the container use scp, sshfs or, if you are using KDE's Dolphin, the fish kioslave.
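As a sketch, sshfs can mount the container's home directory on the host (the address and mount point here are just examples); unmount again with fusermount -u when you are done:
ferry@kalamata:~$ mkdir -p ~/first-home
ferry@kalamata:~$ sshfs ferry@10.51.52.227:/home/ferry ~/first-home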
After a successful build process, there are a number of artifacts which must be transferred to the Edison via the Device Firmware Upgrade (DFU) protocol. By default, USB devices which expose a DFU interface are not directly accessible to non-root users. This can be corrected by a udev rule which identifies a particular device (in this case an Edison) and assigns permissions more appropriate for user access.
The following command will create a file and populate it with such a rule. The tee command requires sudo, as the file it creates is in a directory owned by root. The rule specifically identifies the ID numbers associated with an Edison in DFU mode, and makes the device readable and writable by members of the plugdev group.
echo SUBSYSTEM==\"usb\", ATTRS{idVendor}==\"8087\", ATTRS{idProduct}==\"0a99\", MODE=\"0664\", GROUP=\"plugdev\" | sudo tee /etc/udev/rules.d/46-edison-dfu.rules
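The resulting /etc/udev/rules.d/46-edison-dfu.rules should contain this single line:
SUBSYSTEM=="usb", ATTRS{idVendor}=="8087", ATTRS{idProduct}=="0a99", MODE="0664", GROUP="plugdev"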
It may be necessary to reload the udev system for the rule to take effect.
sudo udevadm control --reload-rules && sudo udevadm trigger
You will also need to verify that your user is a member of the plugdev group.
groups | grep plugdev
If nothing is returned, then you will need to add your user to the plugdev group.
sudo usermod -aG plugdev ${USER}
Log out and log back in for changes to take effect.
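To check that the rule and the group membership work, connect the Edison in DFU mode and list DFU devices as your normal user (this assumes dfu-util is installed, e.g. with sudo apt-get install dfu-util):
dfu-util -l
If the Edison shows up without needing sudo, the permissions are set up correctly.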
I have tried building using Ubuntu for Windows, with no success. Apparently the Windows translation layer does not fully support all properties of a Linux file system.
If you want to get started quickly, you are probably better off installing a virtual machine (like VirtualBox) with a suitable version of Ubuntu (I don't want to start a flame war, but Kubuntu is really nice). Keep in mind that bitbake will produce 60GB - 100GB of files and a virtual machine will be substantially slower than building natively. And a native build can take 2.5 (SSD) to 6 (HDD) hours when building everything from scratch.
CROPS uses Docker containers; on Windows this is done by running a Linux kernel in a Virtual Machine (VM) and then running the containers in Linux. In the end it is still a VM, with the associated slowness, but without the desktop that VirtualBox provides.
Go to the Docker Installation website and download Docker for Windows (Stable). Docker is a software container platform that you need to install on the host development machine.
Set Up the Containers to Use the Yocto Project. Go to CROPS on github and follow the directions for your particular development host (i.e. Linux, Mac, or Windows).
Once you complete the setup instructions for your machine, you have the Poky, Extensible SDK, and Toaster containers available.
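As a rough sketch of what a build with the Poky container then looks like (the work directory is just an example; see the CROPS documentation for the exact invocation on your host):
docker run --rm -it -v c:\Users\myuser\mystuff:/workdir crops/poky --workdir=/workdir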
© 2018 Ferry Toth