I’ve always had a soft spot for 3Com when it comes to cost-effective campus-level switching: it’s cheap, but it has some fancy quirks!
The original 3Com SuperStack series (which is obviously no more since the takeover by HP) was a good product line, offering a feasible access layer for top-of-rack switching. I sound like a salesman here? The main requirements I’ve always had of them are ingress-based traffic queueing/shaping at port level (not ACL based), 802.1Q VLANs and SNMP write ability. From an automation point of view, SNMP within a data centre can be crucial, giving automated panels and front ends a ‘common’ communication path to the access switches.
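As a sketch of what that SNMP write ability buys you, here is a minimal helper that composes the varbind you would push with an snmpset-style call to shut or no-shut a port. The OID is the standard IF-MIB `ifAdminStatus`; the ifIndex value in the example is an illustrative assumption, not a real device.

```python
# Sketch: compose the SNMP varbind you would write to shut/no-shut a port.
# The OID comes from the standard IF-MIB; the ifIndex used in the example
# below is an illustrative assumption, not a real device.

IF_ADMIN_STATUS = "1.3.6.1.2.1.2.2.1.7"   # IF-MIB::ifAdminStatus
STATUS = {"up": 1, "down": 2}              # INTEGER values defined by IF-MIB

def admin_status_varbind(if_index: int, state: str) -> tuple[str, str, int]:
    """Return an (oid, type, value) triple suitable for an snmpset-style call."""
    if state not in STATUS:
        raise ValueError(f"state must be one of {sorted(STATUS)}")
    return (f"{IF_ADMIN_STATUS}.{if_index}", "i", STATUS[state])

# e.g. feed the result to net-snmp's snmpset or a pysnmp SET:
oid, vtype, value = admin_status_varbind(3, "down")
print(oid, vtype, value)   # 1.3.6.1.2.1.2.2.1.7.3 i 2
```

The same pattern extends to any writable MIB object the switch exposes, which is exactly what makes SNMP-capable access switches attractive for front-end automation.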
Over time I’ve seen a lot of the 4400 series, which proved solid and stable for many years. More recently, on a more negative note, they seem to die a slow death, the main issue being that their management interfaces stop responding to ARP requests, leaving you unable to reach or communicate with them in any shape, way or form. It’s not a common fault judging by what I’ve found on the internet, but I’ve seen plenty of cases to conclude that it must be. In most cases a simple power off/on resolves it, but after a while this in turn takes its toll on the device, which will eventually call it a day with corrupt boot-ups.
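One cheap way to catch this failure mode before users do is a watchdog that probes the management address and only flags the switch dead after several consecutive misses, so a single lost packet doesn’t trigger a power cycle. A minimal sketch, with the probe injected as a callable (you would wrap ping or arping in practice; the names here are illustrative):

```python
# Sketch of a watchdog for the "management interface stops answering" failure
# mode: probe the switch a few times and only declare it dead after every
# attempt fails. The probe is an injected callable that returns True when the
# switch answered; function and parameter names are illustrative.
import time

def management_dead(probe, attempts: int = 5, delay: float = 0.0) -> bool:
    """True only if every probe attempt fails (switch likely needs a power cycle)."""
    for _ in range(attempts):
        if probe():
            return False
        time.sleep(delay)
    return True

# Example with stub probes instead of a real ping/arping wrapper:
print(management_dead(lambda: True))    # False: switch still answers
print(management_dead(lambda: False))   # True: consistently unreachable
```

Wiring the result into an alert rather than an automatic power cycle is the safer choice, given that repeated hard power cycles are exactly what seems to finish these units off.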
What can we conclude from this? Well, switches are stable and reliable 99% of the time, but with expanding networks and increased traffic (especially broadcast), issues are going to arise; if it ain’t broke, don’t fix it. In my line of work, whenever we’ve seen 3Coms showing signs of this behaviour, we immediately recommend Cisco or similar as a replacement, specifically the 2960 series.
We have a raft of 3750s throughout the range, from G to X, PoE and non-PoE, 24 to 48 ports. I hate them all with a passion. Everything is a chore with this platform. Hardware failures and software failures relating to stacking are the particular banes of my life. Most of my recent weekends have been spent dealing with stacks of 3750s which have either been upgraded or replaced due to faulty hardware. The single-integrated-PSU Gs with their horrific RPS system drive me mad.

Yesterday I had to fly to an island to swap out a bust G which was running on RPS after blowing its internal PSU. First thing I did (after prep, of course) was to switch the RPS to standby to kick the failed G over either to its internal PSU, if it wasn’t fried, or to dead due to lack of power. I wasn’t counting on this knocking off two other Gs in the same stack. A 5-member stack down to 2. WTF?! They can’t have been running on RPS, as the stupid thing only supplies one device at a time, so why the hell did they power themselves off when it was disabled?! Frantically ripping the RPS cables out of the now-dead switches (the RPS was in standby, remember), then removing and reapplying the power cable had no effect. Removing the stack cables completely and reapplying power DID however revive them. So after I repeated this for the not-really-failed switches and swapped out the one with the blown supply, I went to change my underwear.

Other stacking woes? Removal of a stack member for upgrade (replace G with X): I disconnected both stack cables (the to-be-replaced switch was powered off), but ‘show switch stack-ports’ showed 3 of the 4 necessary cable endpoints as DOWN. How? Turns out that while the stack thought this member was still present when it wasn’t (it showed up in ‘show switch’ as Ready), forwarding was being screwed for some reason (maybe it was punting traffic down a stack cable that wasn’t really there? :-/).
We had to reattach and then detach the switch again in order to observe it properly disappear.

Added an X to a stack a few months ago; it functioned perfectly except it didn’t apply any QoS marking on any of its interfaces. No other hallmarks of any kind of problem, just incorrectly dropped traffic, which was pretty tricky to isolate. A reboot fixed that.

StackPower whinging about unbalanced supplies in an identically populated stack with no hardware or power issues. A reboot fixed that too.

Spontaneous stack split during the addition of an X to a stack of other Xs: a stack cable in the middle of the stack spontaneously appeared disconnected and caused a complete stack split while the stack ring was open at the bottom for the new member. That one could’ve been the fault of the guy doing the work, but he denies it. I have no reason to trust him on this, however: I’m fairly certain he’s the one who keeps bending fibres very tightly (to the point of the insulating sheath going white with stress) to make them fit in cable management arms.

When we finally replace our X installations with Nexus, we’ll be replacing the G installs with the Xs, which at least have dual modular power supplies. A shitload of work, and still massive potential for stack issues to bite us, all the same. And then I’m going to go Office Space on those fucking RPS.
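Half-dead stack links like the one above are easy to miss when eyeballing the CLI across dozens of stacks. A rough screen-scraper for ‘show switch stack-ports’ output can flag them automatically. This is a sketch: the sample text below is an assumed layout, and real IOS output varies slightly between versions, so the parsing would need adjusting against your own devices.

```python
# Rough parser for `show switch stack-ports` output, flagging any stack link
# not reporting "Ok". SAMPLE is an assumed layout for illustration; real IOS
# output differs slightly between versions, so treat this as a sketch.

SAMPLE = """\
Switch #    Port 1       Port 2
--------    ------       ------
  1           Ok           Ok
  2           Ok           Down
  3           Down         Ok
"""

def down_stack_ports(output: str) -> list[tuple[int, int]]:
    """Return (switch, port) pairs whose stack-port status is not 'Ok'."""
    bad = []
    for line in output.splitlines():
        fields = line.split()
        # Data rows have exactly: switch number, port 1 status, port 2 status.
        if len(fields) == 3 and fields[0].isdigit():
            switch = int(fields[0])
            for port, status in enumerate(fields[1:], start=1):
                if status != "Ok":
                    bad.append((switch, port))
    return bad

print(down_stack_ports(SAMPLE))   # [(2, 2), (3, 1)]
```

Run periodically (collected over SSH or SNMP), something like this would have caught both the phantom stack member and the mid-stack cable drop before they caused a split.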
That’s odd. We have hundreds of 24- and 48-port units deployed in our customer base, like hospital campuses and metro WANs, and we see a very rare failure here and there, maybe less than 1%. Certainly nothing like the 23% reported here. Perhaps there’s something particular to that extinct 12-port SFP model. The 3750X/3560X are pretty sweet. Are you deploying switches maybe without power filtering/UPS?
In the setups I’ve worked with, they are deployed in rack cabinets within data centres (UPS, generators, air con, etc.). The units themselves are fairly solid, but it’s an odd bug that I’ve seen time and time again: over time, sometimes after a year, the management interface will stop responding to ARP, leaving you unable to reach it.
In my opinion it only appears to occur on busy LANs.