BABYL OPTIONS:
Version: 5
Labels:
Note:   This is the header of an rmail file.
Note:   If you are seeing it in rmail,
Note:    it means the file has no messages in it.

1,,
Received: by E40-PO.MIT.EDU (5.45/4.7) id AA16582; Tue, 2 Apr 91 18:48:26 EST
Received: from DELWIN.MIT.EDU by MIT.EDU with SMTP
	id AA18608; Tue, 2 Apr 91 18:48:10 EST
From: jon@MIT.EDU (Jon A. Rochlis)
Received: by delwin.MIT.EDU (5.61/4.7) id AA09838; Tue, 2 Apr 91 18:48:05 -0500
Message-Id: <9104022348.AA09838@delwin.MIT.EDU>
To: network@MIT.EDU
Subject: [jan@eik.ii.uib.no: Part III: FDDI status and performance]
Date: Tue, 02 Apr 91 18:48:03 EST

*** EOOH ***
From: jon@MIT.EDU (Jon A. Rochlis)
To: network@MIT.EDU
Subject: [jan@eik.ii.uib.no: Part III: FDDI status and performance]
Date: Tue, 02 Apr 91 18:48:03 EST


I don't think Ron has already forwarded this one :-)

------- Forwarded Message

Date: Tue, 2 Apr 91 23:50:06 +0200
From: jan@eik.ii.uib.no
Message-Id: <9104022150.AA00577@alm.ii.uib.no>
To: sun-nets@umiacs.umd.edu
Subject: Part III: FDDI status and performance
Cc: jan@eik.ii.uib.no
Sender: Sun-Nets-request@umiacs.UMD.EDU


This is a fairly long article about FDDI. If FDDI doesn't interest you,
then please skip the rest.
- ---------------------------------------------------------------------

First, I'd like to sum up some of the things I've gleaned from reading
articles, vendor information etc. Then I'll include some of the information
people sent me after my first request to the Sun-Nets list.

1. FDDI status.
Most vendors already have FDDI equipment available. This goes for all
router vendors and most of the workstation vendors. With the exception
of DEC, most of this equipment is for VMEbus systems.
FDDI is making its final way through the standards bodies this year, with
SMT (Station Management) being the last part through.
Some things to look out for are:
	- The latest rev. of SMT is 6.2 (6.3?). Versions earlier than
	rev. 5.1 are not compatible with 6.2. You should make rev. 6.2
	a requirement for SMT. The other parts of the standard (PHY, MAC, etc.)
	are already settled.
	- FDDI has a lot of configurable options. It's unclear to me
	how many you really have to worry about. (I guess very few, but
	perhaps somebody with practical experience can comment on this?)
	Probably the most important is selecting the Target Token Rotation
	Timer (TTRT). The station that offers the lowest bid upon ring
	initialization will become ring master. The TTRT depends on the
	physical size of the network (fibre length and number of stations);
	see the back-of-the-envelope sketch after this list.
	- More problems will probably be caused by some of the stuff that
	is allowed, but not defined, in the standard, such as:
		- FDDI has a concept of PRIORITY, but its use is not
		defined. This will be left to implementers and inter-
		operability testing. Same thing with synchronous traffic?
		- The use of dual-MACs (also called dual-homing), which
		basically doubles bandwidth on a dual-ring system. (The
		secondary ring normally carries no traffic, until the
		ring folds.) Some vendors are offering this option now,
		but its use is not defined. The major problem is what
		action should be taken when the ring folds and a station
		sees traffic from both its interfaces on the same
		physical ring.
	- When looking at concentrator offerings, consider particularly
	how your vendor handles the *concentrator-off-a-concentrator*
	case, usually called a tree of concentrators. For at least one
	vendor this raises the per-port price by 60-80%. (To my mind
	this is the most sensible configuration, much to be preferred
	over the dual-ring.)
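
To make the TTRT point concrete, here is the back-of-the-envelope sketch
mentioned above (in Python; the propagation and per-station latency figures
are assumptions on my part, typical-looking values, not vendor data):

	def min_ttrt_ms(fibre_km, stations,
	                prop_us_per_km=5.085,   # assumed: light in fibre, ~5 us/km
	                station_delay_us=1.0):  # assumed: per-station repeat latency
	    # The token must circle the idle ring well within one TTRT,
	    # so the ring latency is a hard lower bound on any sane bid.
	    ring_latency_us = fibre_km * prop_us_per_km + stations * station_delay_us
	    return ring_latency_us / 1000.0  # milliseconds

	# Example: a 100 km ring with 500 stations has ~1 ms of latency,
	# so a commonly used default bid of 8 ms leaves ample margin.
	print(min_ttrt_ms(100, 500))

	# Ring initialization: every station bids; the lowest bid wins
	# and becomes the operative TTRT for the whole ring.
	print(min([8.0, 16.0, 165.0]))  # -> 8.0

This only bounds TTRT from below; a larger TTRT improves ring utilization
but lengthens the worst-case access delay, which is the real trade-off.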

2. FDDI performance.
Most reports show that throughput between two stations is a long way from
100Mbit/s. The best news, however, is that almost all reports show
FDDI holding up very well under load, with almost no degradation of
two-station throughput between the no-load (idle) and full-load
situations. An important parameter to watch out for is host loading
due to FDDI traffic. (Getting a 30% load on the host at an FDDI throughput
of 20Mbit/s is bad news in my opinion; see the normalization sketch after
the list below. My guess is that this problem is worse on VMEbus systems
than it will be on Sbus systems.)
	- older reports (i.e. most published stuff was done in 88-89)
	give throughput of 400-600Kbyte/s process-to-process. These
	tests were mostly done on Sun-3's and early Sun-4's. This covers,
	among others, work by Sandia, UltraNet and possibly CERN. (One
	email to Sun-Nets early this year reported ~700Kbyte/s on a 4/490.)
	- I've been looking for published reports where measurements
	were done in late 1990, but they have been hard to come by.
	The following information is from sun-netters and unofficial vendor
	statements (I'm awaiting official corroboration on some of this):
		- Sun has reported disk-to-disk throughput of 15-18Mbit/s
		on Sun-4/490. The host-load was ~30%. (I've been promised
		more info here shortly.)
		- Cray has reported 12-16Mbit/s process-to-process between
		a Cray and a "typical workstation". Cray-to-Cray 24-37Mbit/s.
		- DEC has reported figures in the 30-35Mbit/s range process-to-
		process with their Turbochannel interface card (unpublished
		report from CERN?). One source has hinted that this will
		probably rise to 50-60Mbit/s before the end of 1991.
		- Several sources have stated that Sun's Sbus interface really
		screams (sigh!). In fact almost everybody expects that Sbus
		and Turbochannel FDDI interfaces will out-perform VMEbus
		machines. This is perhaps to be expected, since these new
		buses have a theoretical bandwidth at least 4 times that of
		the VMEbus. (Where will this put EISA-bus machines like the
		HP 700 and IBM 6000? The EISA specification is somewhere in
		the 40Mbyte/s range.)
	- most published reports show less than 10% degradation due to network
	loading (usually 3-7%). This is good.
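
To compare the host-loading figures on an equal footing, here is the
normalization sketch promised above (my arithmetic, not from any of the
reports):

	def cpu_pct_per_mbit(host_load_pct, throughput_mbit):
	    # Normalize host loading to CPU % per Mbit/s of FDDI throughput.
	    return host_load_pct / throughput_mbit

	print(cpu_pct_per_mbit(30, 20))  # the "bad news" example: 1.5 %/Mbit/s
	print(cpu_pct_per_mbit(30, 18))  # Sun 4/490 figure, best case: ~1.7 %/Mbit/s
	# At 1.5 %/Mbit/s the host CPU alone saturates near 65Mbit/s,
	# well short of the 100Mbit/s media rate -- hence my concern.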

The parameters that seem to be most important for good FDDI performance can
probably be picked from the following list (neither exhaustive nor authoritative):
	- mature FDDI chip-sets (arriving), efficient FDDI-host interfaces
	(writing/reading directly to memory, avoiding extra buffering)
	- some believe in putting the protocol engine on the FDDI interface,
	others disagree, but at least some of the processing should be
	off-loaded from the host CPU
	- fast, mature TCP/IP implementations. Factors to get right are
	things like the above; interrupts and context switches are costly, so
	handle the normal case *well*, and get MTU's right (preferably larger?
	see the packet-rate sketch after this list)
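
The MTU point deserves a number (the packet-rate sketch mentioned above;
the 4352 byte figure is the standard IP MTU over FDDI, the rest is plain
arithmetic): per-packet costs such as interrupts and context switches
scale with the packet rate, not the byte rate.

	def packets_per_second(throughput_mbit, mtu_bytes):
	    # Per-packet costs scale with this rate, not with the byte rate.
	    return throughput_mbit * 1e6 / 8 / mtu_bytes

	# At 30Mbit/s (the DEC Turbochannel figure quoted above):
	print(packets_per_second(30, 1500))  # Ethernet MTU: 2500 packets/s
	print(packets_per_second(30, 4352))  # FDDI IP MTU:  ~860 packets/s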

I think one should bear in mind when reading these figures that most of them
have been arrived at using direct socket-to-socket communication, with test
programs like 'ttcp' or UltraNet's 'tsock', which is a rewritten 'ttcp'. Like
all benchmarking, this may be nothing like what your application will see.
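
For those without 'ttcp' handy, a minimal sketch of the same memory-to-memory
measurement idea (my code, not ttcp itself; the host, port and buffer sizes
are arbitrary test values):

	import socket, time

	HOST, PORT = "127.0.0.1", 5001   # assumed test values
	BUF = 65536                      # per-write buffer size
	TOTAL = 64 * 1024 * 1024         # bytes to send, memory-to-memory

	def sink():
	    # Receiver: accept one connection, drain it, report Mbit/s.
	    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
	    srv.bind((HOST, PORT))
	    srv.listen(1)
	    conn, _ = srv.accept()
	    got, t0 = 0, time.time()
	    while True:
	        data = conn.recv(BUF)
	        if not data:
	            break
	        got += len(data)
	    dt = time.time() - t0
	    print("%d bytes in %.2f s = %.1f Mbit/s" % (got, dt, got * 8 / dt / 1e6))

	def source():
	    # Sender: blast TOTAL bytes straight from memory, no disk involved.
	    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
	    s.connect((HOST, PORT))
	    chunk = b"\0" * BUF
	    sent = 0
	    while sent < TOTAL:
	        s.sendall(chunk)
	        sent += BUF
	    s.close()

	# Run sink() in one process and source() in another to get a
	# ttcp-style socket-to-socket figure.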

3. So where is desktop-FDDI?
I'm still surprised at how *small* the response has been. Fortunately, most
of the responses have been very informative. Some people have been through the
same process as us, some decided FDDI will not be on the desktop in 1991, and
some went for UltraNet. (I should have written something about them too, but
will instead suggest those interested get hold of Marke Clinger of Solbourne's
paper.) Very few people seem to be actually running sizeable FDDI networks at
the moment. Most FDDI networks at the moment seem to be either backbone
networks or networks dedicated to experimental work (visualization etc.).
This situation will not change before we get inexpensive FDDI interfaces
for the most popular workstations. Some generations of workstations that
lack expandability will lose out on this (DECstation 2100/3100, Sun-3/50-60's
and SLC, many HP's).

I believe we'll see FDDI on the desktop soon, and I confidently expect the
price to be in the US$ 1000 range during 1992 (and there's at least one
guy out there in net-land who is more optimistic than me :-). I also think
some of the FDDI host implementations will be quite good upon arrival. 

And if I should be wrong, then there is other work going on out there:
	- UltraNet must ultimately get onto the desktop too.
	- Some of the high-speed telephone technology might enter the LAN
	world (one reply mentioned an experimental design for ATM on
	Turbochannel).
	- There are proprietary high-speed implementations like IBM's
	fiber-optic link for the RIOS machines (200-400Mbit/s theoretically,
	but observed throughput much less). But at least they use TCP/IP.

Thanx for your patience, and hopefully this has been somewhat useful. And to
the people who replied, many thanks for your help. Some obviously wanted to
remain anonymous, but I want at least to mention:
	- peter@goshawk.lanl.gov (Peter Ford)
	- whaley@ncsc.org (Jonathan Whaley)
	- datri@lovecraft.convex.com (Anthony A. Datri)
	- oconnor!miker@oddjob.uchicago.edu (Mike Raffety)
	- enger@seka.scc.com (Robert M. Enger)

And please do not hold them responsible for any errors, factual or otherwise.

Jan.

------- End of Forwarded Message


1,,
Received: by E40-PO.MIT.EDU (5.45/4.7) id AA16403; Tue, 2 Apr 91 18:29:49 EST
Received: from DELWIN.MIT.EDU by MIT.EDU with SMTP
	id AA18500; Tue, 2 Apr 91 18:29:38 EST
From: jon@MIT.EDU (Jon A. Rochlis)
Received: by delwin.MIT.EDU (5.61/4.7) id AA09801; Tue, 2 Apr 91 18:29:33 -0500
Message-Id: <9104022329.AA09801@delwin.MIT.EDU>
To: network@MIT.EDU
Subject: cisco AGS+ & FDDI
Date: Tue, 02 Apr 91 18:29:31 EST

*** EOOH ***
From: jon@MIT.EDU (Jon A. Rochlis)
To: network@MIT.EDU
Subject: cisco AGS+ & FDDI
Date: Tue, 02 Apr 91 18:29:31 EST


------- Forwarded Message

From: BIG-MOD%SUVM.BITNET@uga.cc.uga.edu

Date: 01 Apr 91 09:29:10 bst
From: S.Currie@edinburgh.ac.uk
Subject:      cisco AGS+ & FDDI

> Date: 27 Mar 91 09:11:00 EST
> From: "DAVE DOROSZ X-4161" <dorosz@afgl-vax.af.mil>
> Subject:      Cisco AGS FDDI card

> I'm thinking of buying a Cisco AGS+ router.  I would appreciate
> any comments on this product from anyone using it.  I am specifically
> interested in comments on the FDDI card, as I have heard that there are
> some problems with it.  Comments on how well the card handles the FDDI
> protocol would be appreciated.

We have six AGS+ routers on our FDDI ring (which is 8074 metres long, on
62.5 micron fibre). This has been up and running since December and now forms
the backbone for 42 ethernets, 25 of which are directly attached to the
routers. Each router is configured for 12 ethernets, except one which
has a couple of serial links to remote CGS routers.  We are routing IP
and DECNET over the ring, with Appletalk over a parallel ethernet
backbone (not yet available over the ring), plus other protocols being
bridged.

Comments: As the longest section of the ring is 3448 metres long and has
two splices and a patch cord in the middle, I guess there is plenty of
power from the optics.

We have had one card failure, and there is one software bug concerning
the 802.1 Spanning Tree Algorithm (in s/w release 8.1(25)) which I think
is fixed in the next release (due here this week). We run the DEC STA to
get round this. Otherwise our only problem is relatively high error
rates reported by some of the cards. The worst is about 1 in 2000 frames
received. So far we have eliminated the card as the source of the
problem and are investigating the fibre plant, but it could be the
optics within the box.
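
A quick sense of scale for that error rate (a rough sketch only; it assumes
one bit error per bad frame and full-size frames, both simplifications):

	def implied_ber(frame_error_rate, frame_bytes=4500):
	    # Rough implied bit error rate, assuming one bit error per bad
	    # frame and maximum-size (4500 byte) FDDI frames.
	    return frame_error_rate / (frame_bytes * 8)

	# "about 1 in 2000 frames received" on the worst card:
	print(implied_ber(1.0 / 2000))  # ~1.4e-08
	# A healthy fibre link should be orders of magnitude better than
	# this, which supports suspecting the fibre plant or the optics.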

We have also tried interworking with a Wellfleet Link Node on the ring -
fine as a router, no-go as a bridge (both encapsulate I think).

Otherwise it all works, the ring wraps OK etc., and the debug feature on the
cisco is truly wonderful (unless you have megabucks to spend on an FDDI
monitor). We too have heard of problems under heavy load, but our loads
so far are light - we have no mechanism for imposing a high load. All I
can say is that so far "real" traffic has caused us no problems.

Scott Currie
Network Services Manager, Edinburgh University, Scotland


------- End of Forwarded Message

