I said that I was going to take httpd for a spin and see how much it could do on a T2000, but Colm MacCarthaigh beat me to it with some impressive numbers.
Later discussion on IRC showed that the numbers could probably be even better than what Colm found in his testing - turning keepalives on makes a whole lot of difference.
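For reference, the keepalive knobs in question live in httpd.conf; a minimal sketch (the values here are my own assumptions, not the settings from Colm's runs):

```apache
# Illustrative keepalive settings - the values are assumptions, not the
# configuration used in Colm's benchmarks.
KeepAlive On
# Allow many requests per connection before forcing a reconnect.
MaxKeepAliveRequests 1000
# Drop idle connections quickly so threads are freed up for new clients.
KeepAliveTimeout 5
```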
So the testing opportunities aren't quite over for me yet. I want to put the machine under a consistently high load and then start tweaking little bits and pieces to see what happens to resource usage. There's also the beta of Solaris 10 Update 2, which I've been wanting to test anyway.
Many of the ideas I have for tweaking Solaris itself came from a tutorial by James Mauro and Richard McDougall at last year's LISA, where they spoke about "Solaris 10 Performance, Observability, and Debugging". I haven't seen the slides from that talk anywhere but on the conference CD, but solarisinternals.com has a similar set of slides with a few extras. If you ever get a chance to attend a similar tutorial, I highly recommend doing so - I thought it was well worth the whole trip to LISA.
I've written earlier about ideas for using a T2000 at the ASF. Lately we've seen some very heavy hitting of mail-archives.apache.org, which has many gigabytes of mail spread over quite a few files. A T2000 alone wouldn't do it, but a T2000 on top of a large number of disks (not so much for space as for spindles) might very well do the trick. That's another thing I'm hoping to get a clearer picture of with the T2000 that arrived at work yesterday (worst timing ever, as I've got a couple of weeks off), because we plan to hook it up to a spare Hitachi if we can find a couple of extra HBAs.
Unfortunately I'm also hit by the "So many shiny toys, so little time" problem.
Update: AnandTech gets some different numbers from their type of testing. Their numbers don't seem quite as favorable as Colm's, but they're running a different type of workload.
I've been looking for a machine to run OpenSolaris on at home instead of having to fight for test machines at $work.
I wish it could have been a SPARC-based machine, but while Sun makes some really sweet machines, they are neither cheap nor quiet. The release of the OpenSPARC T1 might solve the problem in the long run, but it is bound to take a while.
The solution for now is buying an AMD X2-based box and learning to live with the small deficiencies of Solaris on x86/64. It hasn't been easy, though, because even with the Solaris HCL there's very little information to be found out there about what works and what doesn't.
My choice, based on what I've found on blogs and really wanting a quiet machine, is:
- Asus A8N-SLI/Premium, nForce4, S939
- AMD Athlon64 X2 3800+, S939, E6
- Asus EN6600LE/SI/TD/256MB, PCI-E
- Antec Performance P180
- ThermalRight XP 90
- Kingston DDR400, 2x1024 MB
Another ASF committer, Dan Diephouse, has been running tests on a T2000 with 4x 1 GHz cores. He has some interesting web services-related benchmarks showing good performance, and concludes that "it gets more than 5 times the throughput of my Intel 2GHz Dell".
It would be interesting to see what he could get out of it with enough clients and a decent network. Another thing I'd like to see is some graphs for resource usage while running these tests.
Sun sent mail last week to confirm that they'd shipped a T2000 to us. This 60-day trial isn't part of any official project, so it will just be a colleague and me running it through whatever tests we can dream up.
We haven't talked too much about what to test on it, other than hooking it up to the Hitachi storage that a kind customer let us borrow (thanks!). It will be fun to see what filebench can pull out of a dedicated Hitachi. Maybe not worth much in the greater picture, since this is an older model, but hopefully it will allow me to find some good measuring points with DTrace to deploy in production and use as an early warning system and a debugging aid when running into SAN performance problems. Another thing will be rolling a couple of customer systems onto the box to see how they behave on it, and to see if it can keep up with a V490.
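As a sketch of the kind of measuring point I have in mind, the DTrace io provider can quantize per-device I/O completion latency. This is a generic starting point, not something tuned for the Hitachi:

```d
/* Quantize block I/O latency per device using the io provider.
   A generic sketch - the aggregation and any thresholds would need
   tuning before it could serve as an early warning system. */
io:::start
{
    /* Remember when each buf was issued, keyed on the buf pointer. */
    start[arg0] = timestamp;
}

io:::done
/start[arg0]/
{
    /* Latency distribution per device name. */
    @latency[args[1]->dev_statname] = quantize(timestamp - start[arg0]);
    start[arg0] = 0;
}
```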
Last but not least, I hope to take it for a test run of httpd with the event and worker MPMs. Solaris is well known to benefit greatly from the worker MPM because of its great threading implementation, so it will be very interesting to see just how far I can take it.
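As a starting point, a worker MPM configuration for a box with that many hardware threads might look something like this (all of the numbers are guesses to be tuned during testing, not measured values):

```apache
# Hypothetical worker MPM sizing for a T2000-class machine.
# MaxClients must not exceed ServerLimit * ThreadsPerChild.
<IfModule worker.c>
    ServerLimit          32
    ThreadsPerChild      64
    MaxClients         2048
    StartServers          4
    MinSpareThreads      64
    MaxSpareThreads     256
    MaxRequestsPerChild   0
</IfModule>
```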
Another couple of interesting things to test will be the in-kernel SSL proxy and the crypto accelerator that the T2000 has built in. Not that I expect to be able to scrape together enough gear to give them any real challenge, but there's the whole Apache httpd integration to take a closer look at. I wonder if either of the two is any good with client certificates and making the cert contents known to the backend.
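For the in-kernel SSL proxy, the setup on Solaris 10 goes through ksslcfg; a rough sketch (hostname, ports, and certificate paths are all made-up placeholders):

```shell
# Hypothetical kssl setup: terminate SSL on port 443 in the kernel and
# hand cleartext to an httpd instance listening on port 8080.
# Key file and password file paths are placeholders.
ksslcfg create -f pem -i /etc/ssl/server.pem -p /etc/ssl/passfile \
    -x 8080 www.example.com 443
# The proxy runs as an SMF service instance named after host and port.
svcs "svc:/network/ssl/proxy:kssl-*"
```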
With a little luck I'll be able to run most of the tests on the beta of Solaris 10 update 2.
Unfortunately, I probably can't publish any figures, this being $work related :(
Information and first-hand experience on the Sun Fire T2000 is slowly beginning to appear from other ASF people. Sun sparked it off with their 60-day free trial, and now things are beginning to happen.
Colm MacCarthaigh has already got his T2000 and is putting it through some heavy testing. He says that it "can probably comfortably saturate a 10Gigabit/sec interface", which is not bad at all - I'm looking forward to seeing what other numbers emerge from his benchmarking, because Heanet moves an impressive amount of data through their mirrors.
We're not signed up for the trial at the ASF. We've got enough other things to do on the infrastructure side that playing with new hardware isn't really an option. If anything, I would be tempted to sign up myself just to prove that httpd would really fly with the event and worker MPMs rather than the prefork MPM that ships with Solaris.
I could easily see a use for a couple or more T2000s if we could get them on a permanent basis. Splitting services between .us and .eu would greatly improve our infrastructure, and with our current setup of having no hands on site, having hardware covered by a service contract would be a great improvement.
Thinking with my "enterprise HA" hat on, it would be interesting to see how far a pair of T2000s set up with shared storage and Sun Cluster could go hosting our services separated into zones. Zone migration is another thing that fits nicely into the picture and would make moving or cloning services between locations a whole lot easier.
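A rough sketch of what zone migration on shared storage could look like (zone name and paths are assumptions, and detach/attach requires a Solaris release that actually ships zone migration):

```shell
# Hypothetical zone on shared storage, so it can move between cluster nodes.
zonecfg -z www 'create; set zonepath=/shared/zones/www'
zoneadm -z www install
# Migration: detach on the first node...
zoneadm -z www detach
# ...then, on the other node with /shared/zones mounted:
zoneadm -z www attach
```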