SUMMARY: FDDI vs. CDDI

From: Ying-Meei Chew (ymc@altera.com)
Date: Thu Sep 14 1995 - 10:53:51 CDT


Thank you to all who responded with pieces of advice or experience. It
seems that the responses are a mix: some prefer FDDI and some CDDI.
Therefore a short summary is impossible; I will include the responses
below instead.

As for my site, after reading through all the responses, we decided to go
with standard FDDI.

Thanks again to all the people below who responded.

Ying-Meei Chew
Network Administrator
Altera Corporation

************************************************************************

From: Hans van Staveren <sater@cs.vu.nl>

We have Interphase FDDI over twisted pair cards. No problems.

Hans

-----------------------------------------------------------------------
From: owens@xylan.com (Mark Owens)

We have been using NPI FDDI (DAS/SAS) and CDDI cards for about 6 months
in our network.... we've had good experience with both CDDI and FDDI.
Our early experience with CDDI was shaky in that the early drivers
from NPI had problems and the early cards did not work in some
of our machines (SPARCclassics, to be exact). Currently we have
6 workstations on FDDI/CDDI interfaces and have no problems......

Things to watch out for:

1) you may have to lower the MTU on the F/CDDI interface to match
   Ethernet if you are in a mixed-media environment.... In that case
   you MUST make sure that all of the machines running F/CDDI have
   the same MTU (see the sketch after this list).

2) the CDDI interface needs CAT5 cabling and will not push out as far
   as 10baseT (distance wise), so if the workstation that is on CDDI
   is at the far end of a long run you may have trouble....

3) IBM's implementation of CDDI is based on a 2-year-old draft and
   does NOT interoperate with the rest of the world....we found this
   out at a customer site after putting a 'scope onto the wire and looking
   at voltage levels....BIG INCOMPATIBILITY PROBLEM!
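
A minimal sketch of point (1), lowering the F/CDDI MTU to the Ethernet value
on a Solaris host; the interface name nf0 is only an assumption and depends
on the driver you install:

   # match the Ethernet MTU of 1500 on the FDDI/CDDI interface
   ifconfig nf0 mtu 1500
   # to survive reboots, put the same line in an rc script that runs after
   # the interface is plumbed (the exact script location is site-dependent)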

---------------------------------------------------------------------------
From: Glenn.Satchell@uniq.com.au (Glenn Satchell - Uniq Professional Services)

What about 100 Mbit/s Ethernet as another alternative? The cards are much
cheaper than either FDDI or CDDI, and the driver comes with Solaris 2.4.

----------------------------------------------------------------------------
From: dave@exar.com (Dave Haut)

I am also looking into FDDI and CDDI for our servers here. This is what I have
found out so far:

Sun sells an FDDI card made by NPI (Network Peripherals). It takes two SBus
slots and from what I hear is not very good. Interphase also makes an FDDI
card for the Sun that takes only one SBus slot. This is what we are leaning
towards.

Also be aware that although CDDI is considerably cheaper than FDDI, the max
speed for CDDI is 100 Mbit/s, so if you go with CDDI you may face a
significant upgrade cost if you decide to go to fiber (FDDI) in the future to
use an "up and coming" new protocol like ATM ...

----------------------------------------------------------------------------
From: eric@xylan.com (Eric Peterson)

        There are a couple considerations to look at:

        1. Robustness: CDDI is typically wired up as a SAS (single-attach
        station) "tree" configuration; FDDI also has SAS, but more typically
        uses DAS (dual-attach station) wired in a "ring". The tree
        configuration looks more like what you may do with Ethernet hubs. The
        ring configuration is used to get extra reliability (there are actually
        two rings, one of which sits idle until a break occurs in the ring, at
        which time it comes into use). Here are the more common
        configurations:

        Ring:

                              DAS
                            //   \\
                           //     \\
                         DAS       DAS
                           \\     //
                            \\   //
                              DAS

                If the ring breaks, the ring wraps on itself, maintaining
                connectivity. This is what most people think of when you
                say "FDDI".

        Tree:

                    Concentrator
                    /   |  |   \
                   /    |  |    \
                 SAS  SAS  SAS  SAS

                When a SAS disconnects, the concentrator internally
                bypasses it.

        Ring-of-Trees:

                               DAS Server
                               //      \\
                              //        \\
                    DAS Server            DAS Server
                        ||                    ||
                        ||                    ||
                    Concentrator=========Concentrator
                    /  |  |  \           /  |  |  \
                   /   |  |   \         /   |  |   \
                 SAS SAS SAS   \       /  SAS SAS SAS
                                \     /
                                 \   /
                                  DAS

                The servers are on the dual ring for added reliability; clients
                sit under the concentrators. Note the DAS station hooked
                to both concentrators. This configuration is known as
                "dual-homing" and gives redundancy to that station in case one
                of the concentrators dies or a cable to it breaks.

        2. Cost: FDDI DAS boards may be a bit more expensive than the
        corresponding CDDI boards. SAS boards are probably comparable. Even
        more important, though, may be the cost of cabling -- FDDI requires
        fiber, CDDI uses CAT-5 wiring.

        3. We are currently running FDDI (DAS & SAS), and CDDI from our
        Suns. The cards we're using are from NPI and seem to work fine.
        (Shameless plug: We're feeding them from our OmniSwitch which
        handles FDDI/CDDI as well as Ethernet, token-ring and ATM).

------------------------------------------------------------------------------
From: Simon-Bernard Drolet <droletb@CCG.RNCan.gc.ca>

We've been using SysKonnect CDDI SBus cards in three of our Sun servers (SPARC 20s).
It's working like a charm...

------------------------------------------------------------------------------
From: Simon Bannister <gw3883@nomura.co.uk>

We use the Crescendo CDDI cards on all our Sun servers and clients. The only
problem encountered has been slow NFS performance, but we have yet to prove
that it is the CDDI cards and drivers causing the problem. In general terms,
if you can afford it I would recommend that you go FDDI, as I believe that
CDDI has never been ratified by the IEEE.

------------------------------------------------------------------------------
From: bobr@cassie.Sugar-land.Wireline.SLB.COM ( Bob Reardon )

  Here is a summary of recent experience we had installing fddi on
an SS1000E server.

Here is the problem we originally reported:

  We have recently tried and failed to install a Sun single-attach FDDI SBus
adapter on a new SS1000E with a 50 MHz backplane and 60 MHz CPUs. The OS is
Solaris 2.4 with recommended patches. We started with 101766-06, which was
recommended in the FDDI install kit. The FDDI interface registered very large
numbers of input errors in netstat and could not output (write files to other
hosts). Sun replaced the FDDI card. Same results with the new one. Sun then
gave us patch 101766-10; after it was installed netstat registered fewer
errors on input, but we still could not output successfully.
  The hub we're attached to is a Synernetics (now 3Com) that cannot handle
MTU path discovery, so that was disabled via ndd -set /dev/ip
ip_path_mtu_discovery 0 in /etc/init.d/inetinit during all testing.
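
For reference, a minimal sketch of that workaround as an extra line near the
end of /etc/init.d/inetinit (exact placement within the script is an
assumption):

   # work around a hub that cannot handle path MTU discovery
   ndd -set /dev/ip ip_path_mtu_discovery 0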
  We obtained a Cisco FDDI card (model C301T) and installed it in place of
the Sun card, removing the Sun driver package and installing the Cisco
driver.
  The Cisco (formerly Crescendo) adapter ran immediately with no apparent
problems. We have a 670MP with an earlier C301M Crescendo card that has
run for 2 years without any trouble.
  I would like to hear of any other experiences with FDDI on SS1000E and will
publish a summary.
===============================================================================
SUMMARY:

        Our solution was to buy the Cisco C301T SBus FDDI adapter card. After
initial testing, it has been running on the SS1000E server for a week without
any problems.
        I received a reply from Bob Beauchamp that supplies technical
details as to why we would have problems using our Synernetics hub with
the Sun FDDI card. If you plan to use a 3Com hub/Sun FDDI combination you
might want to contact Bob first.
        Gary Merinstein has installed FDDI on about 25 SS1000Es and used
Cisco cards on all of them after having problems with Sun cards. He
uses Cabletron hubs.
        Anchi Zhang had similar problems and now uses Cisco cards on all
SS1000s.
        Scott Kamin reports success using the Sun FDDI card in conjunction
with a SynOptics hub.
        Jon Diekema reports success using the Sun FDDI card on an SS1000
but did not say what kind of hub he uses.

----------------------------------------------------------------------------
From: jdr@fns.com (Jim Radcliff)

 Our environment is mixed, with Ethernet subnets, CFDDI subnets and fiber
between buildings. We are using Cisco CFDDI hubs because they have
cards to support both Ethernet and CFDDI in the same "box". We use
"Cisco" and "Network Peripherals (NP)" SBus cards. We have found the Cisco
driver software easier to install. We have 72 Cisco cards and 24
NP cards, and the only failure we had was one NP card. We use only the
Cisco cards in the servers because we feel they are more reliable, but
we have no real proof (Cisco is about 40% more expensive). We are using the
SBus cards only on Suns, with both Solaris 2.3 and SunOS 4.1.x.

----------------------------------------------------------------------------
From: " Rogerio Rocha - BVL - Lisbon Stock Exchange -I.S." <rogerio@bvl.pt>

We are not using either, but I will give just some pieces of
advice:
a) distance - FDDI can go 2 km, vs. CDDI's 100 m;
b) FDDI is more robust (immune) to noise and tampering;
c) FDDI cabling is more expensive to deploy;
d) testing and repairs are more expensive with FDDI;
e) fiber can be hard (even impossible) to route in
some parts of a building.

Other points I would also look at are:
1- what cabling is already there;
2- how the networks (new and old) will interconnect;
3- growth path - what can be done and what the costs are;
4- will all equipment be on that network or just part of it,
is it foreseen that equipment other than Sun will be on it,
and is the HW/SW for that available?

In short, there is no single solution, but personally I would love to
have the possibility of a mixed installation, that is, fiber
inside the walls and copper to the desks, and all of that playing
together.

--------------------------------------------------------------------------
From: koen@ciminko.be (Koen Peeters)

If you are setting up a new infrastructure I would suggest you invest in ATM instead of FDDI or CDDI.
ATM is the networking infrastructure of the future.
ATM requires your wiring to be in a star topology. All wiring that does not follow this topology will have to be changed in the long run.
155 Mbit/s ATM can run on either fiber or Cat 5 UTP cabling.
The switch prices are still a little high at the moment, though.

--------------------------------------------------------------------------
From: Joseph Youn <joe@info.ac.hmc.edu>

We have been using an FDDI card on a SPARC 10/514 for over a year
without any problems. As for CDDI, our CS dept. was evaluating
a CDDI card, but apparently it had compatibility problems with the
backplane in a SPARCserver 1000??? Both boards were manufactured by
Crescendo, which I believe was taken over (but I could be wrong).
I haven't heard of anyone who has had a good experience with a
CDDI card, but it could have been that we just needed to hack at
the CDDI a little more...

But, in my experience, I would say "Go with FDDI".

-----------------------------------------------------------------------------
From: Sam Howard <Sam.Howard@dataserv.com>

We implemented a CDDI ring amongst our Sun Sparc servers (and a Network
Appliance FAServer), and it seems to be running fine.

The only strange thing is (and I have not yet tried to track this down) that
the Sparc 5 (only S5) in the group returns a constant ping time of 10ms
while all others (Sparc 20's of various shapes and flavors) return ping
times of 1-2ms. Also, using the CDDI interface on the S5 seems to really
beat on the CPU, so I am using the CDDI as the backup network interface, and
using the Ethernet as the primary (opposite of the S20's).
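
As an illustration of that primary/backup split on a Solaris box (a sketch
only; the interface names le0 for Ethernet and nf0 for CDDI, and the host
names, are assumptions that depend on your driver and naming scheme):

   # Ethernet carries the host's primary name (the one in /etc/nodename)
   echo "myhost"      > /etc/hostname.le0
   # the CDDI interface gets a secondary name/address on its own subnet
   echo "myhost-cddi" > /etc/hostname.nf0

Both names also need entries in /etc/hosts (or your name service) so the
interfaces come up at boot.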

We have been running on this since the beginning of August, and the only
problem we have had is one DOA (or shortly thereafter) SBUS card.

We bought the SBus cards and the concentrator from Cisco (we already have
Cisco routers, et al.), but they are really just Crescendo...the company
that, I think, Cisco bought???

One point of interest, the patch cables you need are NOT standard Ethernet
Cat 5 TP cables...you have to flip the outer pairs (I don't have the pin-out
in front of me, but I can get it if you want...). We got hit by this and
had to lop one end off of each of our to-be CDDI cables and re-end
them...not a problem if you have the tools and the pin-out, but certainly a
pain in the butt since I paid to have them made special...grrr!

I'd be interested in anyone else's experiences with this since it is still
relatively new to us, so please summarize to the net or back to me...Thanks!

----------------------------------------------------------------------------
From: leclerc@austin.asc.slb.com

Some would argue about the viability of CDDI/FDDI given the forthcoming 100BaseT.
100BaseT has the advantage of using Category 5 UTP, like CDDI: no rewiring
cost when/if you switch technology.
On the other hand, if your LAN is close to a source of electromagnetic
interference, FDDI is best. If your LAN is spread over a large area, FDDI
becomes valuable.


