The INS Group has been evaluating industrial Ethernet technology for some time now. The testing we have done has shown this new technology to be capable of handling the harsh environment and stringent data response times required for real-time, production-critical applications. The benefits of this technology are considerable: production data destined for IT would no longer need to pass through gateways running non-standard communications protocols. Standard, non-proprietary network technology could be deployed, leveraging more cost-effective Ethernet technology. A single, standards-based common network will be easier to maintain, and controls support personnel will no longer be inhibited by dissimilar networks.
This document summarizes an IT-funded project to evaluate the feasibility of deploying industrial Ethernet switch technology within Assembly Plants, connected directly to the IT network infrastructure. The primary criterion is that the switches must be compatible and interoperate with IT plant networks. The project also needed to consider how these new industrial networks would be supported and managed. Training procedures and documents were developed for support personnel. Ethernet IP addressing schemes for industrial devices were developed jointly with the IT Architecture organization. Numerous test scenarios were developed to address the different Ethernet network architectures. The test scenarios and results are documented here.
Project Deliverables and Scope
Overview of Industrial Ethernet, EthernetIP and Real Time Process Control
Historically the controls groups within GM have used many different proprietary networks for connecting controls devices: Allen Bradley’s DH/DH+, Modicon’s Modbus, and so on. Open, standards-based controls protocols such as ControlNet, DeviceNet and Profibus are also used extensively. ControlNet and DeviceNet in particular are built on the Control and Information Protocol (CIP). CIP is a true object-oriented protocol that defines many device classes and their respective objects, covering the devices used in GM plants (e.g. sensors, motors, controllers, the TCP/IP interface object, etc.). ControlNet and DeviceNet applications in plants today that utilize CIP can be easily converted to EthernetIP in the future. CIP can also move data seamlessly between an existing ControlNet or DeviceNet network and an EthernetIP network, which makes migration or intercommunication between these networks straightforward (no gateways or custom software required). CIP is also data-link and physical-layer independent. CIP encompasses the transport through application layers of the seven-layer OSI model and maintains its own communications connections/sessions. CIP takes advantage of the more efficient UDP for real-time applications and TCP for standard messaging.
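The UDP-versus-TCP split described above can be illustrated with a minimal sketch. This is not real EthernetIP framing; the payload layout and port handling are illustrative assumptions only. The point is that cyclic I/O data rides on connectionless UDP (a late sample is useless, so nothing is retransmitted), while explicit request/reply messaging would use TCP.

```python
import socket
import struct

def pack_io_frame(sequence: int, io_data: bytes) -> bytes:
    """Pack a toy cyclic I/O frame: 32-bit sequence counter + raw I/O bytes.
    (Illustrative layout, not the EthernetIP wire format.)"""
    return struct.pack("<I", sequence) + io_data

def unpack_io_frame(frame: bytes):
    (sequence,) = struct.unpack("<I", frame[:4])
    return sequence, frame[4:]

def send_cyclic_update(sock, addr, sequence, io_data):
    # UDP: no connection setup, no retransmission -- the producer simply
    # sends the next sample, which is why CIP uses UDP for real-time I/O.
    sock.sendto(pack_io_frame(sequence, io_data), addr)

# Loopback demonstration standing in for a producer/consumer pair
recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.bind(("127.0.0.1", 0))
send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_cyclic_update(send, recv.getsockname(), 7, b"\x01\x00")
frame, _ = recv.recvfrom(64)
seq, data = unpack_io_frame(frame)
print(seq, data)  # 7 b'\x01\x00'
```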
Before Ethernet switch technology became affordable, connecting controls devices to what was then a shared LAN (a single collision domain) was not feasible. The real-time (20 to 40 millisecond response), deterministic response times required by controls applications could not be attained in such an Ethernet environment. In switch-based, full-duplex networks, collisions, which result in retransmissions, are almost entirely eliminated. If a collision is experienced, the back-off and re-send times are also extremely fast, and only the most demanding real-time applications would be affected. Another potential problem is over-subscription of switches that carry real-time, production-critical traffic. Proper design of the controls network is necessary and usually sufficient to avoid this problem. Vendors that develop products adhering to CIP can seamlessly interoperate and be exchanged as desired. The benefits of this kind of flexibility are extensive and are surely one of the most attractive features of EthernetIP/CIP.
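The over-subscription concern above is easy to size with simple arithmetic. The sketch below is a rough design check, with illustrative port counts and rates (not any specific GM switch configuration).

```python
def oversubscription_ratio(ports: int, port_mbps: float, uplink_mbps: float) -> float:
    """Worst-case ratio of aggregate edge bandwidth to uplink bandwidth.
    A ratio above 1.0 means the uplink can be oversubscribed if every
    edge port transmits at line rate simultaneously."""
    return (ports * port_mbps) / uplink_mbps

# Illustrative example: eight 100 Mb/s device ports sharing one 100 Mb/s uplink
ratio = oversubscription_ratio(ports=8, port_mbps=100, uplink_mbps=100)
print(ratio)  # 8.0
```

A ratio this high is tolerable only if the devices are bursty; for cyclic produce/consume traffic the designer would keep the aggregate well under the uplink rate.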
1) Physical Layer Requirements
a) DIN rail mountable; other methods of mounting must be approved by CRW/IS&S
b) Easy, unencumbered view of status lights; end-device and uplink wiring must not block the view.
c) Small footprint
d) Environmental Requirements
i) Must be able to operate in a 60 °C environment
ii) Should not require an enclosure; must be rated IP20 or greater
iii) Vibration: 2g
e) Cable Requirements Copper
i) Cat 5e
ii) Cable bundles for concentration points
iii) Redundant paths
iv) UTP, STP or both
v) Proper cable color, to identify the drop as a controls drop
vi) Connectors - straight, angled or both
vii) Location to write cable ID, port ID, port usage, etc for each port
viii) Gang port connectors to allow faster switch replacement and eliminate reconnection errors
f) Cable Requirements Fiber
i) Singlemode, Multimode or both
ii) Cable bundles for concentration points
iii) Redundant paths
iv) Location to write cable ID, port ID, port usage, etc for each port
g) Power Requirements
i) 24 VDC and/or 110 VAC
ii) Removable screw terminals for DC power and contacts
2) Layer Two (MAC) Protocol Requirements
a) Spanning Tree Protocol
b) Multiple VLANs
c) 802.1Q trunking compatibility
d) Interoperability with Cisco switches
e) Interoperability with Enterasys switches
f) Cumulative port speed should not overload/oversubscribe backplane
g) IGMP Snooping
h) Switches should be store & forward with packet error checking
3) Layer Three (IP) Protocol Requirements
a) IP addressable
4) Layer Four (TCP & UDP) Protocol Requirements
a) Telnet Capable
b) Browser capable
c) Control and Information Protocol (CIP)
5) Layer Seven (Application) Requirements
a) DHCP capable of assigning/renewing reserved (static) IP addresses
b) OPC capable
c) CIP capable
d) DNS capable
e) SNMP capable
f) Bootp capable
6) Network Management/SNMP Requirements
a) Standard MIB management (mandatory); required MIB objects
b) Private-enterprise, device-specific MIBs (a plus); optional MIB objects
7) Switch Troubleshooting & Diagnostics
a) Local switch access for troubleshooting
b) Visual Diagnostic lights, two lights per port
i) Error indication by lights: red = error, yellow = warning, green = normal
ii) Error indication by cold contacts: indicates loss of power or port link loss
c) Standard com console port; RJ45, 9- or 25-pin D-shell
d) Remote switch access
i) ICMP (ping)
e) Diagnostic Capabilities both Remote and Local
i) Show commands
ii) Speed & Duplex
iii) Type (i.e. 100FX, 10FX, 10BaseT, etc.)
iv) Transmit errors
v) Receive errors
vi) FCS errors
f) Switch Configuration
i) Remote: Telnet, Browser
ii) Local console port
iii) Configuration options
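Several of the requirements above (Telnet capable, browser capable, remote switch access) reduce to a TCP service being reachable on the switch's management address. A quick, hedged reachability check can be scripted as below; the hostnames and ports are illustrative, and here a throwaway local listener stands in for a real switch.

```python
import socket

def tcp_service_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds
    (e.g. port 23 for Telnet, port 80 for the browser interface)."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Demonstrate against a local listener instead of an actual switch
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
host, port = server.getsockname()
open_ok = tcp_service_open(host, port)
print(open_ok)  # True
server.close()
closed_ok = tcp_service_open(host, port)
print(closed_ok)  # False -- connection refused once the listener is gone
```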
Testing Tools, Methodology and Test Scenario Format
Following is a list of test tools and what they were used for:
Following is the basic format of each test scenario:
1. Chariot files:
i. Cisco industrial switch testing in a Cisco enterprise environment
ii. Chariot file format: “CCScenXTestXDateTime”
2. PLC file format: the workbook is CiscoCiscoPLCTestResults
i. worksheets are formatted: CCScenXTestXDateTime
3. Sniffer traces: CCScenXTestXDateTime
Cisco Industrial Switch Testing in a Cisco Plant IT Backbone
The following test procedures will test the integrity/interoperability of Cisco 2955 industrial switches uplinked into access-layer Cisco 2950 switches. These Cisco 2950 switches will collapse into Cisco 3550 route switches at the core layer. This network will then be uplinked, at both layer 2 and layer 3, into a separate, parallel Cabletron Securefast network. The following test scenarios and procedures will duplicate as nearly as possible the network environments in which these Cisco 2955 industrial switches will be deployed at GM Assembly Plants. The primary purpose of this testing is to confirm that the Cisco 2955 does not run/generate any network protocols or traffic that will interfere with the Cisco infrastructure. The Cisco 2955 will be tested to validate throughput, transaction rate, response time and fail-over response in various network scenarios/configurations. The Cisco 2955 will also be tested to verify required CRW features. Chariot End Points (EPs) and sniffers will be used to generate network traffic. Ideally these tests will generate enough traffic load to adversely affect the PLC traffic, essentially finding an operating threshold. PLCs will also be used to generate EthernetIP traffic, both TCP (Green Path) and Multicast/UDP (Red Path). The response time of controls traffic in these simulated network environments will be recorded and evaluated.
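The response-time measurements described above follow a simple bounce-and-time pattern. The sketch below illustrates the idea with a UDP datagram timed over a round trip; a local echo socket stands in for the PLC, and all names are illustrative (on the test network the target would be a real device's address).

```python
import socket
import threading
import time

def udp_round_trip_ms(sock: socket.socket, addr, payload: bytes = b"ping") -> float:
    """Send a small datagram and time the round trip, in milliseconds."""
    start = time.perf_counter()
    sock.sendto(payload, addr)
    sock.recvfrom(64)  # wait for the echo
    return (time.perf_counter() - start) * 1000.0

def echo_once(sock: socket.socket):
    """Echo a single datagram back to its sender (stand-in for the PLC)."""
    data, client = sock.recvfrom(64)
    sock.sendto(data, client)

echo = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
echo.bind(("127.0.0.1", 0))
echo.settimeout(2.0)
t = threading.Thread(target=echo_once, args=(echo,))
t.start()

probe = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
probe.settimeout(2.0)
rtt = udp_round_trip_ms(probe, echo.getsockname())
t.join()
print(f"{rtt:.3f} ms")  # loopback round trip, far under the 20-40 ms budget
```

On the real test network the same measurement, repeated under increasing Chariot load, would show when PLC response times approach the 20 to 40 ms limit.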
1. Determine throughput, transaction rate and response time between Chariot EPs connected to the Cisco 2955 and Chariot EPs connected at different switches. The EPs will be in the same and different VLANs. These tests will simulate inter zone traffic.
2. Set up test scenarios that load a local cell switch with PLC-type traffic (produce/consume). What is the traffic limit on each type of switch?
3. Determine failover time when a primary uplink from a Cisco 2955 to a Cisco 2950 is broken. This test measures spanning tree convergence time in a mixed-vendor environment. Three different tests are needed:
a. Spanning tree running on the Cisco side only
b. Spanning tree running on the Cisco 2955 only
c. Spanning tree running on both the Cisco 2955 and the Cisco 2950
4. Determine whether the Cisco 2955 is generating any protocols/traffic that may adversely affect the Cisco network.
5. Determine throughput, transaction rate and response time when traffic is routed between Chariot EPs connected to the Cisco 2955 and Chariot EPs connected to a Cisco 2950.
6. Determine throughput, transaction rate and response time between Chariot EPs connected to a Cisco 2955 uplinked to a Cisco 2950 and Chariot EPs connected to a Cisco 2955 uplinked to the same Cisco 2950; EPs are in the same VLAN.
7. Verify that Multicast routing without IGMP snooping behaves like a broadcast
8. Verify that Multicast routing with IGMP snooping behaves like unicast packets within the multicast group.
9. Verify layer 3 & 4 switching; can tagging be done based on IPX, IP or TCP port number?
10. Verify that QoS with differentiated services works on the Cisco 2955s and interoperates across the Cisco backbone.
11. Verify ability to set port thresholds for broadcast, multicast and unicast traffic.
12. HSRP failover testing; do all end devices come back up properly when routed links fail?
13. Verify that Bootp and DHCP operate properly across VLANs
14. Test SNMP management for different NMS packages (CiscoWorks, Spectrum, HiVision, etc.)
15. Identify potential choke points in plant architectures
a. Remote NMS
b. Switch & Router Uplinks
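For context on failover test 3 above: classic 802.1D spanning tree reconverges on fixed timers, which is why measured failover times matter so much for traffic with a 20 to 40 ms response budget. The arithmetic below uses the standard 802.1D default timer values; actual deployed timer settings may differ.

```python
# Default 802.1D spanning tree timers (seconds)
MAX_AGE_S = 20        # time before a lost root BPDU ages out
FORWARD_DELAY_S = 15  # time spent in each of the listening and learning states

def stp_worst_case_convergence_s(max_age: int = MAX_AGE_S,
                                 forward_delay: int = FORWARD_DELAY_S) -> int:
    """Indirect failure: wait for max-age expiry, then listening + learning."""
    return max_age + 2 * forward_delay

def stp_direct_link_failure_s(forward_delay: int = FORWARD_DELAY_S) -> int:
    """Directly detected link loss skips max-age: listening + learning only."""
    return 2 * forward_delay

print(stp_worst_case_convergence_s())  # 50
print(stp_direct_link_failure_s())     # 30
```

Even the best case, 30 seconds, is three orders of magnitude beyond the controls response budget, so the failover tests must establish what the actual measured outage is in the mixed-vendor topology.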
Test Result File Save Formats for the Cisco Test Scenarios are as Follows:
1. Chariot Files: Cisco_Cisco_Scenario#_Date_Time_# of ‘pairs’.tst
2. Sniffer Files: Cisco_Cisco_Scenario#_Date_Time.tst
3. PLC Files: Cisco_Cisco_PLC_Test_Results.xls
a. Worksheets: E_C_Scenario#_Date_Time
NOTE1: The date and time fields are the start date and time of a Chariot session. If a Chariot session is not being run, then make sure the date and time fields are the same for the sniffer and PLC files.
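A small helper can build the Chariot result filename from the convention above so the date/time fields stay consistent across the sniffer and PLC files. The date/time layout (`YYYYMMDD_HHMM`) is an assumption; the document does not specify the exact timestamp format.

```python
from datetime import datetime

def chariot_filename(scenario: int, start: datetime, pairs: int) -> str:
    """Build Cisco_Cisco_Scenario#_Date_Time_#pairs.tst per the convention
    above. The YYYYMMDD_HHMM timestamp format is an illustrative assumption."""
    stamp = start.strftime("%Y%m%d_%H%M")
    return f"Cisco_Cisco_Scenario{scenario}_{stamp}_{pairs}pairs.tst"

# Hypothetical session: scenario 3, 10 Chariot pairs
name = chariot_filename(3, datetime(2003, 5, 12, 14, 30), 10)
print(name)  # Cisco_Cisco_Scenario3_20030512_1430_10pairs.tst
```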
NOTE2: Router interfaces on the 3550 switch/router must be set up as SVI/VLAN interfaces, except for Securefast VLANs, which should be set up as actual physical layer-three interfaces.