Carrier Sense - When a device connected to an Ethernet network wants to send data, it first checks to make sure it has a carrier on which to send it (usually a piece of copper cable connected to a hub or another machine).
Multiple Access - This means that all machines on the network are free to use the network whenever they like so long as no one else is transmitting.
Collision Detection - A means of ensuring that when two machines start to transmit data simultaneously, the resulting corrupted data is discarded and re-transmissions are generated at differing time intervals.
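The three rules above can be sketched in a few lines of Python. This is only an illustration of the decision logic, not a real network stack; the `Station` class, its method names, and the backoff cap are all assumptions made for the example (the backoff shape loosely follows Ethernet's truncated binary exponential backoff).

```python
import random

class Station:
    """Illustrative sketch of the CSMA/CD rules, not a real network API."""

    def __init__(self, name):
        self.name = name
        self.attempts = 0  # consecutive collisions on the current frame

    def try_send(self, medium_busy, collision):
        """Return the action a station takes under CSMA/CD."""
        if medium_busy:
            # Carrier Sense: someone else is transmitting, so wait
            return "wait"
        self.attempts += 1
        if collision:
            # Collision Detection: pick a random backoff, with the range
            # doubling after each collision (capped here at 2**10 slots)
            slots = random.randint(0, 2 ** min(self.attempts, 10) - 1)
            return f"back off {slots} slots"
        return "transmit"

s = Station("machine 2")
print(s.try_send(medium_busy=True, collision=False))   # wait
print(s.try_send(medium_busy=False, collision=False))  # transmit
```

Multiple Access is implicit: any station may call `try_send` at any time; the carrier-sense check is the only gatekeeper.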
Here are some animated GIFs to help explain basic Ethernet operation; below each one is a description of what is happening.
This is a coax based Ethernet network where all machines are daisy chained using RG58 coaxial cable (sometimes referred to as Thin Ethernet or Thin-net).
Machine 2 wants to send a message to machine 4, but first it 'listens' to make sure no one else is using the network.
If it is all clear, it starts to transmit its data on to the network (represented by the yellow flashing screens). Each packet of data contains the destination address, the sender's address, and of course the data to be transmitted.
The signal moves down the cable and is received by every machine on the network, but because it is only addressed to number 4, the other machines ignore it.
Machine 4 then sends a message back to number 2 acknowledging receipt of the data (represented by the purple flashing screens).
But what happens when two machines try to transmit at the same time? A collision occurs, and each machine has to 'back off' for a random period of time before re-trying.
For the sake of simplicity I have omitted the acknowledgement transmissions from the rest of the animations on this page.
This animation starts with machine 2 and machine 5 both trying to transmit simultaneously.
The resulting collision destroys both signals and each machine knows this has happened because they do not 'hear' their own transmission within a given period of time (this time period is the propagation delay and is equivalent to the time it takes for a signal to travel to the furthest part of the network and back again).
Both machines then wait for a random period of time before re-trying. On small networks this all happens so quickly that it is virtually unnoticeable, however, as more and more machines are added to a network the number of collisions rises dramatically and eventually results in slow network response. Time to buy a switch!!!
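The round-trip propagation delay mentioned above is easy to put a rough number on. The figures here are assumptions for illustration only: a 185 m thin-net segment (the usual RG58 limit) and a signal speed of about 66% of the speed of light, a typical velocity factor for coax.

```python
# Rough arithmetic for the round-trip propagation delay on one
# thin-net segment. Both figures below are illustrative assumptions.

SEGMENT_LENGTH_M = 185           # assumed max length of an RG58 segment
SIGNAL_SPEED_M_S = 0.66 * 3e8    # assumed signal speed in coax (~66% of c)

one_way = SEGMENT_LENGTH_M / SIGNAL_SPEED_M_S
round_trip = 2 * one_way
print(f"round trip: {round_trip * 1e6:.2f} microseconds")
```

At under two microseconds per segment, it is easy to see why collisions and retries go unnoticed on a small network.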
The exact number of machines that a single Ethernet segment can handle depends upon the applications being used, but it is generally considered that between 40 and 70 users are the limit before network speed is compromised.
An Ethernet hub changes the topology from a 'bus' to a 'star wired bus', here's how it works.
Again, machine 1 is transmitting data to machine 4, but this time the signal travels in and out of the hub to each of the other machines.
As you can see, it is still possible for collisions to occur but hubs have the advantage of centralised wiring, and they can automatically bypass any ports that are disconnected or have a cabling fault. This makes the network much more fault tolerant than a coax based system where disconnecting a single connection will bring the whole network down.
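A hub's repeating behaviour can be sketched in one line: an incoming frame is sent back out of every other port, regardless of the destination address. The port numbering is an assumption for the example.

```python
# A hub repeats an incoming frame out of every port except the one it
# arrived on; it never looks at the destination address.

def hub_forward(in_port, ports):
    """Return the list of ports the hub repeats the frame to."""
    return [p for p in ports if p != in_port]

print(hub_forward(1, [1, 2, 3, 4, 5]))  # [2, 3, 4, 5]
```

Because every frame still reaches every machine, the network remains a single collision domain, which is why collisions are still possible with a hub.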
To overcome the problem of collisions and other effects on network speed, a switch is used.
With a switch, machines can transmit simultaneously, in this case 1 & 5 first, and then 2 & 4. As you can see, the switch reads the destination addresses and 'switches' the signals directly to the recipients without broadcasting to all of the machines on the network.
This 'point to point' switching alleviates the problems associated with collisions and considerably improves network speed.
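The 'point to point' switching described above can be sketched as follows: the switch learns which port each hardware (MAC) address lives on by watching source addresses, then forwards frames only to the destination's port. The class, addresses, and port numbers are illustrative assumptions, but the learn-then-forward behaviour is how a basic Ethernet switch works.

```python
# Sketch of a learning switch: note the source port of every frame,
# and forward only to the learned port when the destination is known.

class Switch:
    def __init__(self):
        self.mac_table = {}  # MAC address -> port number

    def receive(self, src_mac, dst_mac, in_port, all_ports):
        self.mac_table[src_mac] = in_port        # learn the sender's port
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]     # forward to one port only
        # Unknown destination: flood like a hub until it is learned
        return [p for p in all_ports if p != in_port]

sw = Switch()
ports = [1, 2, 3, 4, 5]
# Machine 1 (port 1) sends to machine 4: not yet learned, so flood
print(sw.receive("mac-1", "mac-4", 1, ports))   # [2, 3, 4, 5]
# Machine 4 replies: the switch already knows mac-1 is on port 1
print(sw.receive("mac-4", "mac-1", 4, ports))   # [1]
```

Once both addresses are learned, traffic between machines 1 and 4 never touches the other ports, which is what lets pairs of machines transmit simultaneously.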
In the real world, however, one or more of these machines will be servers, and as most network traffic is between the clients and a server, a serious bottleneck can occur. The answer to this problem is to make the server connections faster than the clients'. The normal solution is to have the client machines on 100Mb/s ports and the servers on 1000Mb/s ports (Gigabit Ethernet). This ten-to-one ratio is usually adequate because not all of the clients will need to access the servers at the same time.
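A quick sanity check on that ten-to-one ratio: a single 1000Mb/s server port can absorb full-rate traffic from ten 100Mb/s clients at once. The client count of 40 is an assumption for illustration, taken from the segment sizes discussed earlier.

```python
# Sanity-check arithmetic for the 10:1 server uplink ratio.
# The client count is an illustrative assumption.

CLIENT_RATE_MBS = 100    # each client port
SERVER_RATE_MBS = 1000   # the Gigabit server port
clients = 40

simultaneous_full_rate = SERVER_RATE_MBS // CLIENT_RATE_MBS
print(simultaneous_full_rate)  # 10 clients can run flat out at once

fraction = simultaneous_full_rate / clients
print(f"{fraction:.0%} of {clients} clients at full rate")  # 25%
```

So as long as no more than a quarter of those 40 clients hit the server at full rate simultaneously, the Gigabit uplink keeps up.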