This talk will be an introduction to some of the storage technologies available in Solaris today or in the near future:
- In-kernel CIFS support
- Remote replication with AVS
- iSCSI and SCSI (COMSTAR) target support
- SAM-FS hierarchical storage management
- Highly scalable filesystems such as QFS and pNFS
It is a slightly updated version of my talk from Open Source Days. This time I won't have to hurry as much, and I have added a couple of extra slides.
Steve McKinty of Sun Microsystems will be in Denmark to speak at the Open Source Days 2008 conference, and he has offered to give an extra talk on Open High Availability Clustering the day before the conference. Steve has led the Solaris Cluster Geographic Edition team since 2005.
Open High Availability Cluster (OHAC) is the open-source code base of Solaris Cluster, a high availability (HA) clustering solution from Sun Microsystems. The main difference between Open HA Cluster and Solaris Cluster is that Open HA Cluster doesn't provide an end-user product or complete distribution. Instead, it is an open source code base, along with the build tools necessary to develop, build, and use the code.
The talk will take place Thursday October 2nd at 16:00 in room 2A12 at the IT-University, Rued Langgaardsvej 7, 2300 Copenhagen S.
The term cluster is often associated today with High Performance Computing (HPC), but it also has a key role to play in the area of Business Continuity and High Availability. There is a lot of commercial interest in developing innovative solutions to these problems.
This talk will start with a brief description of the fundamentals of Business Continuity and how a High Availability framework such as Solaris Cluster can be used in that area, when combined with data replication technologies.
It will describe in more detail some of the work being done through the Open HA Cluster (OHAC) and related communities, and then will look at the technical details and progress of some projects that have been recently started by community contributors.
Lastly I'll discuss other opportunities for joint work and contributions with the OHAC members.
There will be time for questions on any aspect of Open HA Cluster and Business Continuity.
I ran into a small issue when live upgrading a Solaris 10 8/07 (update 4) system to 10 5/08 (update 5). According to the log, well over six hundred packages failed to install. Looking closer at the log, the cause was the same every time:
    Cannot find required executable /usr/bin/7za
    pkgadd: ERROR: class action script did not complete successfully
Asking around a bit, I found that /usr/bin/7za ships as part of SUNWbzip, but only from 5/08 onward. The easy fix was to manually install SUNWbzip and then start the upgrade once more.
Other new features in 5/08 are listed here. For me the only really interesting one is CPU capping, which may come in handy for some of our zone servers at the ASF.
Update: Ryan Novosielski tells me that there's a patch to supply 7zip: 137321-01 for sparc and 137322-01 for x86. I suppose that's the official way (although the LU installer should supply it). Adding the package as I wrote before works just as well.
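A minimal pre-flight check along these lines can save a failed upgrade run. This is just a sketch based on the fix above; the function name and the media mount point in the comment are my own assumptions, not anything from the installer:

```shell
# check_7za [PATH] - report whether the 7za binary needed by the 5/08
# class action scripts is present (default location /usr/bin/7za)
check_7za() {
    if [ -x "${1:-/usr/bin/7za}" ]; then
        echo "7za present; safe to run luupgrade"
    else
        echo "7za missing; add SUNWbzip (or patch 137321-01 / 137322-01) first"
    fi
}

check_7za
# If it reports missing, install the package on the running (source) boot
# environment before upgrading, e.g. (media path is an assumption):
#   pkgadd -d /cdrom/Solaris_10/Product SUNWbzip
```

Running this before luupgrade on the source boot environment avoids finding out six hundred packages in that the class action scripts can't run.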
So why do I care? Both SAM and QFS could be immensely useful for work, and having access to the code rather than being stuck with a short-term trial makes it a whole lot easier to get a test system up and running. Also, from a long-term perspective, much of what's happening in storage these days seems to be happening around OpenSolaris.
Traditional storage vendors must be feeling the heat from Sun, and it will be interesting to see what happens in an area where charging an arm and a leg for features seems more common than not.
Recently, Sun announced their latest CoolThreads servers based on the UltraSparc T2 processor.
If you are familiar with the UltraSparc T1, the new T2 is fairly similar, but without most of the T1's limitations. Some of the most interesting changes are:
- 64 threads (up from 32, still 8 cores)
- 8 fully pipelined floating point units (up from 1)
- 8 crypto accelerators (one per core)
- Dual 10Gbit Ethernet and PCI-E integrated onto chip (and the crypto accelerator to feed it full speed)
The 3 machines are:
T5120 - compares to the T1000 and fixes two of my largest complaints about the T1000 by having a redundant power supply and room for four 2.5" SAS drives. All in a 1U package.
T5220 - compares to the T2000. Not much else to say, other than there now being room for eight 2.5" SAS drives. I think I'd rarely pick a T5220 over a T5120 unless I needed the extra internal drives (very unlikely) or wanted the 1.4GHz model.
T6320 blade - a blade version of the T5120, which is quite interesting because Sun's blade enclosures let you mix and match between UltraSparc T1, UltraSparc T2, AMD Opteron and Intel Xeon based blades. Unfortunately, the T6320 blades appear to be unavailable for the time being.
At the ASF we have two T2000s - see Out with the old and in with the new. Eos and Aurora are quietly working along through a decent workload and we're quite happy with them (all right, so Eos is very busy and Aurora is mostly there as a backup, but redundancy is a good thing). The load usually only goes high when bots run wild or similar forms of abuse hit, but most of the time we manage pretty well.
There is however another area where an UltraSparc T2 based server could do a whole lot of good, and that is our Subversion repository. Keeping it afloat is currently a bit of a task, as more and more ill-behaved Continuous Integration tools keep popping up and hitting us like there's no tomorrow. There seems to be no end to the silliness, such as trawling the whole Harmony tree every 30 seconds looking for updates (in one bad example, 2 IPs belonging to a large micro......... company hit us with between 400,000 and 1 million requests/day). It doesn't often affect other Subversion users, but it would be very nice to be able to keep up with the load a little better and not have to resort to firewalling early on when we get hit. A T2 based server would give us a whole lot more headroom than the current dual-processor box, and moving everything to SSL would no longer be utopia.
I'm sure it could keep up with the load (as long as we can find a suitable storage array to duct tape it to) as the performance figures from bmseer are looking pretty darn amazing.
Looking at the UltraSparc T2 based servers from a work perspective, I'm also expecting to see a few of them in the near future (if I get a say). There are plenty of candidate systems running on older and much less efficient platforms.