Using a Coraid LN20/SR1521 with Plan 9

The SR1521 is a Plan 9 system installed on a PC that serves up to fifteen 750 Gbyte SATA disks over 1000BASE-T using AoE (ATA over Ethernet). It is built and distributed by Coraid. We use it as our main file server. It syncs the disks once every 5 seconds, so you do not need to run any software of your own in the SR; you can use it simply as a rack of disks.

The LN20 is a PC, also from Coraid, that ships with Linux installed. You can remove the Linux partition and install Plan 9 on it to serve the storage in the SR1521, which is what we did.

Because the LN20 has no CD reader, we installed Plan 9 on a disk using a regular PC and then moved the disk to the LN20. The LN20 ships with a tiny flash disk on the IDE bus, but it can be replaced with a standard disk if you want (or you can install Plan 9 on the flash disk itself). We used a separate disk; we later learned that Coraid distributes the pre-installed Linux image from their web site, so replacing the flash disk loses nothing.

We made the LN20 boot from the local disk, as a Plan 9 CPU server, and then we customized /bin/cpurc to start extra services to export the AoE storage from the SR1521.

Installation is trivial, although you may need to patch up your Plan 9 kernel in the LN20 so that it supports AoE. Ask Coraid (or 9fans) for help.

First, in the SR, we defined several lblades (see Coraid's manual), using the SR raid software.

In the LN20's Plan 9, we used fs(3) to partition the huge disks in the SR. Partitions must be a multiple of 512 bytes in size and aligned to a 512-byte boundary. This is the start script that brings AoE up on our LN20 and defines some partitions for use:

# Three blades: 0.0, 0.1, 0.2
# 750G in raid1 each

bind -a '#æ' /dev
bind -a '#l1' /net
echo autodiscover off >/dev/ctl
echo bind /net/ether1 >/dev/ctl
for (target in 0.0 0.1 0.2) {
	echo discover $target >/dev/ctl
}
sleep 3
ls -l /dev/0.?/data

# Configure partitions using fs(3)

# In blades 0.0 and 0.1, our main stuff plus music:
# 25G main fossil, 25G planb fossil
# 50G other fossil
# 4 * ~12G index, ~600G arenas
# index spread in two disks
# The same for the "other" partition, also archived in venti

echo part mainfossil  /dev/0.0/data 0            26843545600  >/dev/fs/ctl
echo part planbfossil /dev/0.0/data 26843545600  26843545600  >/dev/fs/ctl
echo part otherindex0 /dev/0.0/data 53687091200  13056032256  >/dev/fs/ctl
echo part mainindex1  /dev/0.0/data 66743123456  13056032256  >/dev/fs/ctl
echo part otherindex2 /dev/0.0/data 79799155712  13056032256  >/dev/fs/ctl
echo part mainindex3  /dev/0.0/data 92855187968  13056032256  >/dev/fs/ctl
echo part mainarenas  /dev/0.0/data 105911220224 644245094400 >/dev/fs/ctl

echo part otherfossil /dev/0.1/data 0            53687091200  >/dev/fs/ctl
echo part mainindex0  /dev/0.1/data 53687091200  13056032256  >/dev/fs/ctl
echo part otherindex1 /dev/0.1/data 66743123456  13056032256  >/dev/fs/ctl
echo part mainindex2  /dev/0.1/data 79799155712  13056032256  >/dev/fs/ctl
echo part otherindex3 /dev/0.1/data 92855187968  13056032256  >/dev/fs/ctl
echo part otherarenas /dev/0.1/data 105911220224 644245094400 >/dev/fs/ctl
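As a sanity check (our addition, not part of the original setup), the layout above can be verified mechanically: every offset and size is a multiple of 512 bytes, and the partitions on each blade are contiguous with no overlap. A minimal sketch:

```python
# Sanity-check the fs(3) partition layout above (offsets and sizes in bytes).
# The tables mirror the echo commands for blades 0.0 and 0.1.
layouts = {
    "0.0": [
        ("mainfossil",  0,            26843545600),
        ("planbfossil", 26843545600,  26843545600),
        ("otherindex0", 53687091200,  13056032256),
        ("mainindex1",  66743123456,  13056032256),
        ("otherindex2", 79799155712,  13056032256),
        ("mainindex3",  92855187968,  13056032256),
        ("mainarenas",  105911220224, 644245094400),
    ],
    "0.1": [
        ("otherfossil", 0,            53687091200),
        ("mainindex0",  53687091200,  13056032256),
        ("otherindex1", 66743123456,  13056032256),
        ("mainindex2",  79799155712,  13056032256),
        ("otherindex3", 92855187968,  13056032256),
        ("otherarenas", 105911220224, 644245094400),
    ],
}

for blade, parts in layouts.items():
    end = 0
    for name, off, size in parts:
        assert off % 512 == 0 and size % 512 == 0, name   # 512-byte aligned
        assert off == end, name                           # contiguous, no overlap
        end = off + size
    print(blade, "ends at", end, "bytes")
```

Each blade ends at 750156314624 bytes, which fits in a marketing "750G" drive; the fossil partitions are 25 and 50 Gbytes (powers of two), each index slice is about 12.2 Gbytes, and each arenas partition is 600 Gbytes.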

Once this is done, we have disk partitions available to start venti and fossil. This is the script that brings up our fossil and our two ventis. We raise the priority of the fossil and venti processes because the kernel does not consider them high-priority by default.

# Venti and fossil servers from aoe disks
# cp aoefs /rc/bin/aoefs

# our main server, work.
echo main venti...
venti/venti  -c /cfg/venti.conf

# music and other stuff
echo other venti...
venti/venti  -c /cfg/oventi.conf

# This fossil serves it all.
#	main: main fs, against venti
#	planb: planb fs, against venti
#	student: old music fs, against venti, not archived
#	other: music fs, against oventi
#	sources: sources mirror, against venti
echo fossil...
fossil/fossil  -c '. /cfg/fossil.conf'

# old dump vacs from venti.
venti=tcp!localhost!venti {test -e /n/vac/dump || vacfs -c 100 /lib/venti/dump.vac}
srvfs olddump /n/vac/dump ; chmod a+rw /srv/olddump

# fossil starts with base priority 10. Make it 13, like the boot fossil
# otherwise things may be slow
ps | grep fossil | awk '{printf("echo pri 13 >/proc/%s/ctl\n", $2);}' | rc
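The pipeline above extracts each fossil pid from ps output and writes a priority command to its ctl file. As an illustration of the same transformation (the sample ps lines and their column layout, pid in the second column, are assumptions of ours):

```python
# Mirror of the rc pipeline above: find fossil processes in ps-style
# output and emit the ctl writes that raise their priority to 13.
# The sample output format (pid in the second column) is an assumption.
sample_ps = """glenda  21  0:01  0:00  248K Await  fossil
glenda  35  0:00  0:00  120K Pread  rc"""

cmds = [
    "echo pri 13 >/proc/%s/ctl" % line.split()[1]
    for line in sample_ps.splitlines()
    if "fossil" in line
]
for c in cmds:
    print(c)   # prints: echo pri 13 >/proc/21/ctl
```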

This is the Plan 9 network at URJC:


Contruner is our 4th edition file server. It uses a Coraid SR1521 to supply 7.5 Tbytes of raw storage, used to maintain a fossil file system backed up by venti. DNS, DHCP, and auth services are provided by whale.lsub.org, which also keeps our old file system with a 500 Gbyte venti. Aquamar, known as plan9.lsub.org, is the main frontend to the outside; it provides web and mail services. Two extra CPU servers, hydra and leviatan, act as auth/file servers for the student laboratories. Most other machines are terminals, including those used at home.

The storage in contruner is maintained by using fs(3) to partition the SR disks. The SR takes care of maintaining the mirrors by itself, for reliability.

Source: http://plan9.escet.urjc.es/sr.html
