Installing Live on a system with a spanking new HD?
#1
Until I get my 360+GB HDD, I have an old 13GB drive I'm going to use temporarily, just to fart around with things until I'm comfortable. I also recently bought a Seagate 1TB drive that I plan on storing all my DVDs on.

The motherboard I have is an Nvidia board. The 13GB drive is IDE and is the boot disk I installed Live onto; the 1TB is SATA. At some point I'm going to add a second 1TB drive for additional space and/or as a mirror, and will turn on the on-board RAID features then.

Having said all that, I've got Live on the 13GB drive and the 1TB is factory default. When I boot from the 13GB, I see the XBMC splash but then it just sits with a cursor. Switching to one of the TTYs, I see a scrolling message saying it is trying to mount /dev/sda1 to /container, but "no such file or directory".

I presume that it is trying to auto-mount the 1TB drive and obviously can't because it hasn't been partitioned or formatted.

Is there a way around this? I would expect a failed mount to just abort, not keep retrying. Can I somehow get to a command prompt so I *can* format the drive, or do I need to boot from the flash drive and format/partition it from there? I have never had this kind of problem with any other Unix system I've run (FreeBSD), and I've had occasion to add capacity to them before. This may just be a Linux- or Live-specific thing.

How do people generally add new storage capacity to their Linux/Live systems? I suspect I'll run into a similar issue if/when I add the second 1TB drive, though the RAID utility may actually take care of that when I set up the mirror.

Any help or ideas are appreciated.
#2
The SATA drive took precedence over the IDE one; something to be taken into account in the next releases.
For the moment, boot with the live CD or USB, mount the boot partition on the IDE drive, open syslinux.cfg and add root=/dev/hda1 to the kernel parameters. It should work, though I have never tested this workaround.
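Roughly, from the live environment, that would be something like this (the device name /dev/hda1 and the mount point are assumptions, adjust them to your layout):
Code:
sudo mkdir -p /mnt/ideboot
sudo mount /dev/hda1 /mnt/ideboot
sudo nano /mnt/ideboot/syslinux.cfg    # add root=/dev/hda1 to the append line
sudo umount /mnt/ideboot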
#3
I have implemented some changes for the next release; please try repeating the procedure once the new image is out.
Cheers
#4
l.capriotti Wrote: I have implemented some changes for the next release; please try repeating the procedure once the new image is out.
Cheers

Any ETA? :D

I'll try the method you mentioned in your first reply when I get home, thanks!
#5
As mentioned, I have added root=/dev/sda1 to my syslinux.cfg file; that is the device the boot filesystem is on, on my IDE HDD. My SATA drive gets mounted as /dev/sdb1.

One thing that strikes me as odd is that when I disconnect the SATA drive the system boots fine, yet the SATA drive is where all my movies are. Another oddity is that my /etc/fstab is essentially empty; it was like that even right after the install. The only active entry in that file is the one mounting /dev/sdb1. Is there another file somewhere that specifies where the boot drive's partitions get mounted?
#6
On further in-depth troubleshooting, it appears /dev/sda1 contains what I gather is supposed to get mounted to /container, which appears to be a RAM filesystem. Yet at the same time, something (I still can't find what) is telling the system to mount /dev/sda1 to /mount/sda1. The mount to /mount/sda1 seems to happen before the attempt to mount to /container, and by then the damage is done.

The confusing part for me is that this conflict ONLY happens when I plug in my SATA drive, which according to dmesg is showing up as sdb1, so why something is conflicting over sda1 I haven't a clue. My worry is that if I boot off the flash drive again and reinstall to the sda1 drive, I'll have the same problem, and that it has something to do with the flash drive being plugged in at install time.

Unfortunately I don't have a CD-ROM drive handy (I was going to put a Blu-ray drive in at some point), but perhaps I'd be better served putting the Blu-ray drive in my XP machine for ripping my Blu-ray discs and just using a normal DVD-ROM in the XBMC machine instead of booting/installing from flash...
#7
Has anyone figured out a workaround for this issue?

I too have installed the Live distribution to my IDE HDD. When my SATA drives are disconnected, the system boots fine. But when I connect one or both SATA drives, I run into the same issue: it tries to mount /dev/sda1 to /mnt, repeating every 0.5 seconds. As a result of the mount failure, I get the message "Did not find /mnt/rootfs.img".

I tried changing root=/dev/sda1 in syslinux.cfg to /dev/hda1, /dev/sdb2, etc., and it didn't make a bit of difference. The newer kernels no longer recognize an IDE drive as /dev/hda, but rather as /dev/sda.

I tried booting into "safe" mode, in which case diskmount won't run, but that doesn't help.

I tried renaming /etc/init.d/xbmc to something else, but it doesn't appear it even gets to the point of executing the scripts in /etc/rc2.d.
#8
Using UUIDs is the way I'm planning to resolve this issue in the next round: you can try it yourself by booting into XBMCLive without the SATA drives and issuing:
Code:
sudo vol_id /dev/xxx1
where xxx1 is whatever your boot partition is, and note the UUID in the output.

Open syslinux.cfg and replace root=/dev/xxx1 with root=UUID=YYYYYYYYYYY or similar. I can't test the right syntax right now, so you may encounter issues...
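For example, if vol_id reports 1234567890ABCDEF, the kernel line in syslinux.cfg would go from something like the first line below to the second (the initrd path and the quiet/splash options are just placeholders, keep whatever your file already has):
Code:
append initrd=/initrd.img root=/dev/sda1 quiet splash
append initrd=/initrd.img root=UUID=1234567890ABCDEF quiet splash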

Let me know how it goes so that I can replicate your steps in the installer!

Cheers
Luigi
#9
kernel, kampkrusty,

Using /dev/sda1 in your syslinux.cfg and /etc/fstab is a bad idea if you have more than one drive. Say you have XBMC installed on the only drive in the system. Since there is only one drive, it will be assigned /dev/sda on boot-up, and XBMC will boot fine from /dev/sda1. But if you install another drive for your media and that drive gets assigned /dev/sda on boot-up, your XBMC drive becomes /dev/sdb and its boot partition /dev/sdb1, and XBMC will fail to load.

Here is a foolproof way of mounting as many IDE and SATA drives as you like and always having XBMC boot off the correct drive every time.

Boot off the XBMC Live CD. Type mount to see which drives are mounted, and note each drive's device name. For example:

/dev/sda1 on /media/sda1 type vfat (rw,noexec,nosuid,nodev,fmask=0133,dmask=0022,uid=1000,gid=1000)

This means the device /dev/sda1 is currently mounted at /media/sda1.

Then type ls -l /dev/disk/by-uuid to see the UUIDs for the corresponding devices. The UUID for a drive is unique and will not change no matter how many drives are attached. From the above example:

lrwxrwxrwx 1 root root 10 2008-11-24 08:19 1234567890ABCDEF -> ../../sda1

This means the device /dev/sda1 has a UUID of 1234567890ABCDEF.

If /dev/sda1 is where you installed XBMC, make the following change to your syslinux.cfg located in /media/sda1. From the above example:

Replace root=/dev/sda1 with root=UUID=1234567890ABCDEF

Now XBMC will always boot from the drive with UUID=1234567890ABCDEF, which never changes no matter how many drives you add later.
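Putting the steps together as one session (the device name, mount point and UUID are taken from the example above; substitute your own):
Code:
mount                                # find which device your boot partition is mounted from
ls -l /dev/disk/by-uuid              # map that device to its UUID
sudo nano /media/sda1/syslinux.cfg   # change root=/dev/sda1 to root=UUID=1234567890ABCDEF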
#10
This is an issue when booting off the IDE drive with one or more SATA drives present. If Linux were to recognize an IDE drive as /dev/hda, I don't expect we would be seeing this issue.

Here's what works:

Boot XBMC Live Beta 1 from USB flash drive, regardless of whether 1 IDE and/or 2 SATA drives are attached
Boot XBMC Live Atlantis from IDE drive, regardless of whether the same USB flash drive is attached - but NO SATA drives can be attached

Here's what doesn't work:

Boot XBMC Live Atlantis from IDE drive with 1 or 2 SATA drives attached

Changing /dev/sda1 to UUID=... in syslinux.cfg did not make any difference. I had previously tried /dev/hda1 and /dev/sdb1, which didn't make a difference either. I would have expected an error about /dev/hda1 not being a valid device, but it seems something goes wrong before it even reaches that point. I also tried setting UUID=... in /etc/fstab, but that too made no difference; that file does not appear to be touched until after the point of failure. For example, I set UUID=... for both SATA drives. When I boot with just the IDE drive, I get a filesystem check failure because those UUIDs are not found, which is correct as the drives are not attached. But when I boot with the IDE drive and only one of the two SATA drives attached, I don't see the filesystem check failure for the missing drive, because I get stuck at the /mnt error first.
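For reference, the entries I added to /etc/fstab looked something like this (the UUIDs and mount points here are placeholders, not my real ones):
Code:
UUID=1111-AAAA  /media/sata1  ext3  defaults  0  2
UUID=2222-BBBB  /media/sata2  ext3  defaults  0  2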

It appears that when a SATA drive is attached, it takes precedence over the IDE drive and gets auto-mounted to /mnt before the squashfs tries to mount? I have grepped many files in /usr/bin, /etc/init.d, etc., and cannot find the statement generating the mount failure. Is the script that is trying to mount /dev/sda1 to /mnt embedded in the squashfs image? Could that script somehow be edited to check whether /mnt is already mounted, and unmount it if so, just to get past this error? I know this may not be the ultimate solution, but it would be nice to try. Of course, I don't know a thing about manipulating a squashfs filesystem. And as I also don't know much about Linux boot loaders, could this be an issue with syslinux that grub/lilo doesn't experience?

Funny, when everything boots correctly, I cannot see any files or directories in the root filesystem / (even though I can cd to /etc, /var, and so on), yet on Beta 1 on the USB drive I can see everything.
#11
I had the same problem when booting with fixed SATA drives.
Editing syslinux.cfg and replacing root=/dev/sda1 with root=UUID=<UUID> worked for me. Thanks.
#12
Hmmm... I wonder why it's not working for me then. I have an ASUS P4PE motherboard with an Adaptec SATA Connect ASH-1205 PCI card. Maybe the fact that SATA is not built into the motherboard has a bearing on this?

Is there a way to boot in debug mode, writing a log somewhere, to help troubleshoot this? Something in /var/log would be great, but there is no way to get a login prompt when this happens.
#13
I've discovered what the problem is.

My SATA drives are taking precedence over my IDE drive when device assignments are made. The following occurs based on what is connected:

(1) Both SATA drives and IDE drive:

/dev/sda <- First SATA drive
/dev/sdb <- Second SATA drive
/dev/sdc <- IDE drive

(2) One SATA drive and IDE drive:

/dev/sda <- SATA drive
/dev/sdb <- IDE drive

(3) IDE drive only:

/dev/sda <- IDE drive

The script "disk" in the initrd image searches for the boot disk in the order of hda, hdb, sda, sdb. It assumes the boot disk is the first one found, which is wrong in scenarios (1) and (2), but not an issue in scenario (3). When the script sees sda as a valid device, it assumes that is the boot device. When I changed the logic in the script to mount each device in succession (adding sdc to the list), testing for the existence of rootfs.img, the issue magically went away. Of course when it comes down to it, this is just a bandaid fix. But at least I can now get on to my flashing login screen issue.
