The INS Group has been a key network provider to manufacturing for over 24 years. From 1996 to 2014 we were a tier 1 supplier to a major automotive manufacturer, providing network architects, program/project managers, financial analysts, logistics managers and software developers. Our network engineers made up 85% of the company's network architecture team (NAT) and were involved in network development across the United States, Mexico and Canada during this time. Our teams were responsible for more than 500 network projects, many of them very large scale programs for which we architected, staged/built and managed the deployment of the entire network infrastructure. The sites involved were assembly, powertrain, stamping and other manufacturing facilities.
WiFi networks have become ubiquitous in everything we do; the IoT would not be possible without them. Their appeal comes from their obvious ease of use for clients, but building a truly robust system that meets the highly complex requirements of today's businesses requires a high level of expertise. The very nature of what makes wireless networks so desirable also makes them inherently insecure. They are also highly susceptible to RF noise and interference, requiring each WiFi access point's (AP) channel/channel bond to avoid overlapping frequencies with adjacent APs. These are just a few of the complexities; there are many more, and they have driven INS toward design/staging/deployment processes that are highly templatized and automated where practical.
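As a simplified illustration of the non-overlapping channel constraint, the following Python sketch assigns the three non-overlapping 2.4 GHz channels (1, 6, 11) to APs using a greedy coloring of an adjacency map. The AP names and adjacency data are hypothetical placeholders; in practice they would come from a predictive design tool or an on-site survey.

```python
# Minimal sketch: greedy assignment of non-overlapping 2.4 GHz channels (1, 6, 11)
# to adjacent APs. AP names and adjacency are hypothetical placeholders; real
# input would come from a predictive model or on-site survey.
NON_OVERLAPPING = [1, 6, 11]

# Which APs can "hear" each other (i.e., must not share a channel).
adjacency = {
    "AP-01": ["AP-02", "AP-03"],
    "AP-02": ["AP-01", "AP-03", "AP-04"],
    "AP-03": ["AP-01", "AP-02"],
    "AP-04": ["AP-02"],
}

def assign_channels(adjacency):
    """Greedy graph coloring: pick the first channel not used by any neighbor."""
    assignment = {}
    for ap in sorted(adjacency):                     # deterministic order
        used = {assignment.get(n) for n in adjacency[ap]}
        free = [ch for ch in NON_OVERLAPPING if ch not in used]
        if not free:
            raise ValueError(f"{ap}: no non-overlapping channel available; "
                             "cell layout or TX power needs rework")
        assignment[ap] = free[0]
    return assignment

if __name__ == "__main__":
    for ap, channel in assign_channels(adjacency).items():
        print(f"{ap}: channel {channel}")
```

In a real design the same idea extends to 5 GHz channel bonding and transmit power, which is one reason these steps lend themselves to templates and automation.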
The INS Group has designed, staged and deployed WiFi networks at assembly, powertrain and stamping plants, foundries, warehouses, and manufacturing campuses throughout the United States, Mexico and Canada. Many of these projects replaced or updated the site's entire WiFi network. These manufacturing facilities were always very high risk, which required well-thought-out, detailed plans covering every aspect of a WiFi project. A typical project includes requirements gathering, network design, design validation testing, design reviews, physical infrastructure, route/switch updates, security/FW, network services, integration with plant floor controls networks, application integration, deployment and technical cut-over support, systems acceptance testing (SAT), and audits of the site's physical and logical wired/wireless infrastructure.
Following are examples from an assembly plant where we upgraded the network multiple times from 1998 through 2014. The first upgrade at this site was a complete forklift replacement of the entire mix of broadband and shared Ethernet network infrastructure with the latest fiber, ATM and switched Ethernet. This project was part of a much larger program that upgraded more than 80 manufacturing and warehouse facilities throughout North America. Following is the high-level project scope/cadence for this program:
1. New fiber optic distributed star topology backbone
2. New TC (telecommunications closets)
3. New above-floor suspended platforms at all but four TC and IC/TC locations for cabinets, electronics, fiber PPs (patch panels), Cat 5e PPs, power distribution units (PDUs) and AC units for NEMA 12 electronics cabinets
4. New Category 5e drops for over 2500 devices.
5. A detailed IP addressing plan that properly segments traffic to isolate and secure production controls devices from plant IT traffic (see the addressing sketch after this list)
6. Detailed physical designs
7. Detailed logical electronics designs.
8. Bid process
9. Create detailed physical BoMs
10. Create detailed electronics BoMs
11. Create detailed timeline
12. Create detailed cadence/deployment cut-over plans
13. Create, set up and execute PoC/interoperability validation test plans
14. Platform installation vendor fabricates and installs platforms
15. Installation vendors install fiber tube cable and pull Cat 5e drops to all MC/IC/TC locations
16. Create detailed training plans
17. Build, stage, configure and test all physical and electronic hardware per design at the customer's network staging center.
18. Perform detailed SAT
19. Disconnect all connections to electronics, PDUs and AC units and prep for shipment
20. Ship the network to the site.
21. Deployment at the site.
22. The network lead and their team test and validate that the network is functioning per design and resolve any issues that come up.
23. Create a complete documentation library
24. Perform a UAT (user acceptance test) with the plant and hand over all documentation.
25. Close out project.
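Referring back to item 5, here is a minimal sketch of how an addressing plan might carve a site block into segregated subnets for plant IT and production controls traffic, using Python's standard ipaddress module. The site supernet, VLAN names and prefix sizes are illustrative assumptions, not the actual plan.

```python
# Minimal sketch of a segmented site addressing plan (illustrative values only).
# The site supernet, VLAN names, and prefix lengths are assumptions for the
# example; a real plan is driven by device counts and security policy.
import ipaddress

SITE_BLOCK = ipaddress.ip_network("10.20.0.0/16")   # hypothetical site allocation

# Segments that must be isolated from one another (controls vs. plant IT).
SEGMENTS = [
    ("VLAN 110 - Plant IT clients",        24),
    ("VLAN 120 - Servers / services",      24),
    ("VLAN 200 - Controls / PLC network",  23),
    ("VLAN 210 - Robotics / EthernetIP",   23),
    ("VLAN 300 - WiFi voice (WVoIP)",      22),
]

def build_plan(site_block, segments):
    """Carve the site block into subnets, largest subnets first, without overlap."""
    plan = []
    remaining = [site_block]
    for name, prefix in sorted(segments, key=lambda s: s[1]):
        # take the first remaining block big enough for this prefix
        for i, block in enumerate(remaining):
            if block.prefixlen <= prefix:
                subnet = next(block.subnets(new_prefix=prefix))
                plan.append((name, subnet))
                # keep whatever address space is left over
                remaining[i:i + 1] = list(block.address_exclude(subnet))
                break
        else:
            raise ValueError(f"no space left for {name}")
    return plan

if __name__ == "__main__":
    for name, subnet in build_plan(SITE_BLOCK, SEGMENTS):
        print(f"{subnet!s:<18} {name}")
```

The isolation itself is then enforced with VLANs, ACLs and firewall policy between the IT and controls segments; the plan simply guarantees clean, non-overlapping boundaries to enforce against.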
The next upgrade came about seven years later, with a scope to implement a new corporate Cisco-based architecture. Since the previous project had upgraded the entire physical infrastructure, this scope only needed to upgrade electronics and add some additional Cat 5e drops for new devices. This project's high-level scope/cadence could therefore skip items 1, 2, 3, 6, 8, 14, 15 and 16; item 21 became an on-site electronics installation, and item 23 became an update to the existing documentation library.
From a logical design perspective these were significant new technology upgrades, and the business planned for applications to take advantage of them. The existing architecture, based on a Cabletron ATM core/distribution with an Ethernet edge, would be replaced with a new all-Cisco gigabit Ethernet core/distribution architecture. One of the areas to be upgraded was the plant floor controls network. Prior to this upgrade, communications from the controls networks had to traverse bridges or gateways to reach the IT backbone. The new controls/robotics network architecture incorporated EthernetIP (Ethernet Industrial Protocol) and CIP (Common Industrial Protocol), which had started to be rolled out several years earlier. The INS Group had played a key role in developing this new architecture and in the PoC/validation testing.
The next upgrade at this site started in 2012 under a larger program with a scope to upgrade thirty North American manufacturing sites from their then-current autonomous WiFi to controller-based, WVoIP-capable WiFi. This program also replaced the existing Motorola PTT-based radio system with a new 4G smartphone-based PTT system. An INS Group officer was the program manager and technical lead for this program.
WVoIP PTT Project:
This was a strategic, high-visibility initiative to replace an existing legacy production-critical PTT system using Motorola radios at 30 US manufacturing facilities. A 4G LTE/WiFi PTT solution was selected to replace the EoL Motorola system. The new solution used standard ruggedized PTT-capable smartphones that could run over 4G, WiFi, DAS or metro cells. The manufacturing production-critical WiFi networks at that time were autonomous (non-controller based) and not WVoIP capable, so the key deliverable for this project was to convert all 30 facilities to controller-based, WVoIP-capable WiFi networks, a requirement of the new 4G LTE PTT solution. Since the current solution was being completely decommissioned, this project was also required to replace over 20,000 old radio handsets with new ruggedized smartphones. The PTT systems at these facilities were not only production critical, but personnel safety/security critical as well. Plant personnel depend on these PTT phones during emergencies as their primary means of communication. Because of this security/safety requirement, either cellular or WiFi service had to be available on the new smartphones (as it was with the old Motorola radio system) in all areas of a facility, inside or outside, where personnel could be working. The new system had to deliver a WVoIP Mean Opinion Score (MOS) of 4.0 to 4.5 audio quality in 100% of the buildings and 100% of the perimeter grounds at each plant. To attain this level of reliability, the proposed WiFi system had to maintain the minimum 802.11n WVoIP cell-edge requirement of -67 dBm, verified by a full WiFi passive survey to determine actual coverage. The new ruggedized smartphones also had to provide 100% reliable communications throughout the facility and its perimeter, with the ability to seamlessly roam onto a public service network.
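To make the -67 dBm cell-edge and coverage requirements concrete, here is a minimal Python sketch of how passive-survey RSSI samples might be checked against the WVoIP threshold and rolled up into a per-area coverage percentage. The sample data, area names and coverage target are illustrative assumptions, not actual survey results.

```python
# Minimal sketch: check passive-survey RSSI samples against the -67 dBm WVoIP
# cell-edge requirement and report per-area coverage. Sample data, area names,
# and the coverage target are illustrative assumptions.
CELL_EDGE_DBM = -67      # 802.11n WVoIP design requirement from the project scope
COVERAGE_TARGET = 1.0    # 100% of sampled points must meet the threshold

# area -> strongest-AP RSSI (dBm) recorded at each sampled survey point
survey_samples = {
    "Body shop":        [-54, -61, -63, -66, -59],
    "Paint shop":       [-58, -65, -70, -62],      # one failing point
    "Perimeter north":  [-64, -66, -67, -65],
}

def coverage_report(samples, threshold=CELL_EDGE_DBM, target=COVERAGE_TARGET):
    results = {}
    for area, readings in samples.items():
        passing = sum(1 for rssi in readings if rssi >= threshold)
        ratio = passing / len(readings)
        results[area] = (ratio, ratio >= target)
    return results

if __name__ == "__main__":
    for area, (ratio, ok) in coverage_report(survey_samples).items():
        status = "PASS" if ok else "needs additional AP / survey follow-up"
        print(f"{area:<18} {ratio:6.1%}  {status}")
```

Any area falling short of the target is flagged for follow-up, which is essentially what the Phase II passive surveys described below did at full scale.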
From start to finish the program was completed in eight months, two weeks prior to the end of the existing PTT service contract. The program was broken into two phases of 30 separate projects each, with six network engineers and six project managers assigned. Phase I deployed a separate (parallel) controller-based WiFi network with new APs, layer 2/3 switches and Cat 6 cabling. The new 802.11n MIMO-based APs were co-located with the existing autonomous APs. This was the most risk-averse, least complicated solution; it was more expensive, but when weighed against losing even eight hours of production it was the best choice. Phase II meant going back, performing a passive survey of the new network and determining where, if anywhere, additional APs might be required. This was somewhat putting the cart before the horse, but pre-testing indicated that the new MIMO-based APs had much better coverage than the existing non-MIMO APs. When this program was completed the PTT team had deployed over 180,000,000 square feet of production-critical, controller-based WiFi WVoIP coverage, 20,000 AT&T smartphones, 4,200 APs, 60 WLAN controllers and 800,000 feet of Cat 6 cabling, along with configuration upgrades to all layer 2/3 switches, core routers, FWs and WAN routers; many long weeks and weekends were put in by the PTT team, with ZERO downtime or lost units! This was a dual-role project in which INS was the technical architect/lead and PMO with a team of six network engineers and six PMs. Monthly reporting on this project went to the board level.
The INS Group has been evaluating, testing and deploying industrial Ethernet technology for over twenty years. The testing done by the INS Group has proven this technology capable of handling the harsh environment and stringent data response times required for real-time, production-critical applications. The benefits of this technology are considerable: production data destined for the enterprise network no longer needs to go through gateways running non-standard communications protocols. Standard, non-proprietary network technology can be deployed that leverages more cost-effective Ethernet components. A single standards-based common network is easier to maintain, and support personnel are no longer inhibited by dissimilar networks.
Historically, controls groups have used many different proprietary networks for connecting controls devices: Allen-Bradley's DH/DH+, Modicon's Modbus, and so on. There are also open, standards-based controls protocols such as ControlNet, DeviceNet and Profibus that are used extensively. ControlNet and DeviceNet support/use the Common Industrial Protocol (CIP). CIP is a true object-oriented protocol with many device classes and their respective objects defined for the devices used in manufacturing facilities (e.g. sensors, servo/stepping motors, controllers, the TCP/IP interface object, etc.). ControlNet and DeviceNet applications in plants today that utilize CIP can be easily converted to EthernetIP in the future. CIP can also move data seamlessly between an existing ControlNet or DeviceNet network and an EthernetIP network, which makes migration or intercommunication between these networks straightforward (no gateways or custom software required). CIP is also data link and physical layer independent. CIP encompasses the transport through application layers of the seven-layer OSI model and maintains its own communications connections/sessions. CIP takes advantage of the more efficient UDP for real-time applications and TCP for standard messaging.
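As one concrete example of CIP riding on standard Ethernet and UDP/TCP, the sketch below broadcasts an EthernetIP List Identity request (encapsulation command 0x0063) on UDP port 44818 and reports which CIP devices answer. This is a simplified illustration of the encapsulation layer only; the broadcast address, timeout and sender context value are assumptions for a typical lab subnet, and response parsing is kept deliberately minimal.

```python
# Minimal sketch: broadcast an EthernetIP List Identity request (encapsulation
# command 0x0063) on UDP/44818 and report responding CIP devices. The broadcast
# address and timeout are assumptions; response parsing is deliberately minimal.
import socket
import struct

ENIP_PORT = 44818
LIST_IDENTITY = 0x0063

def list_identity_request():
    # 24-byte EthernetIP encapsulation header:
    # command, length, session handle, status, sender context (8 bytes), options
    return struct.pack("<HHII8sI", LIST_IDENTITY, 0, 0, 0, b"INSGROUP", 0)

def discover(broadcast="192.168.1.255", timeout=3.0):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    sock.settimeout(timeout)
    sock.sendto(list_identity_request(), (broadcast, ENIP_PORT))
    devices = []
    try:
        while True:
            data, addr = sock.recvfrom(4096)
            command, length = struct.unpack_from("<HH", data, 0)
            if command == LIST_IDENTITY:
                devices.append((addr[0], length))
    except socket.timeout:
        pass
    finally:
        sock.close()
    return devices

if __name__ == "__main__":
    for ip, payload_len in discover():
        print(f"CIP device at {ip} answered List Identity ({payload_len} bytes of identity data)")
```

The identity payload each device returns (vendor ID, product code, serial number, product name) is what makes standards-based discovery and asset inventory possible without vendor-specific gateways.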
In the past, before Ethernet switch technology became affordable, the idea of connecting controls devices to what was then a shared LAN (a single collision domain) was not feasible. The real-time (20 to 40 millisecond response), deterministic response times required by controls applications could not be attained in a shared Ethernet environment. With switch-based, full-duplex networks, collisions, which result in retransmissions, are almost entirely eliminated. If a collision is experienced, the back-off and re-send times are extremely fast and only the most demanding real-time applications would be affected. Another potential problem is over-subscription of switches that carry real-time, production-critical traffic. Proper design of the controls network is necessary and usually sufficient to avoid this problem.
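As a simple illustration of the over-subscription concern, the short calculation below compares the aggregate edge bandwidth of an access switch against its uplink capacity. The port counts, speeds and acceptable ratio are hypothetical.

```python
# Minimal sketch: uplink over-subscription check for a controls access switch.
# Port counts, speeds, and the acceptable ratio are illustrative assumptions.
def oversubscription_ratio(edge_ports, edge_mbps, uplink_mbps):
    """Worst-case ratio of aggregate edge bandwidth to uplink bandwidth."""
    return (edge_ports * edge_mbps) / uplink_mbps

# e.g. 24 x 100 Mbps device ports feeding a single 1 Gbps uplink
ratio = oversubscription_ratio(edge_ports=24, edge_mbps=100, uplink_mbps=1000)
print(f"Oversubscription: {ratio:.1f}:1")   # 2.4:1

# A real-time controls segment would typically be engineered toward 1:1,
# for example by limiting device ports per uplink or aggregating uplinks.
```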
Products from vendors that adhere to CIP can seamlessly interoperate and be exchanged as desired. The benefits of this kind of flexibility are extensive and are surely among the most attractive features of EthernetIP/CIP.
This document summarizes several major testing initiatives to evaluate the feasibility of deploying industrial Ethernet switch technology as a replacement for proprietary plant floor industrial networking protocols. These new EthernetIP-based networks are able to connect directly to the enterprise network infrastructure. The primary criterion was that the switches must be compatible and interoperate with the enterprise plant networks. The project also needed to consider how these new industrial networks would be supported and managed: training procedures and documents were developed for support personnel, and Ethernet IP addressing schemes for industrial devices were also developed. Numerous testing scenarios were developed to address different industrial Ethernet network architectures along with the switching and routing features that would need to interoperate.
The INS Group was part of a ground-up design and build of a large enterprise data center (DC), the second of a redundant pair architected to enable active/active, active/passive and full failover capability for a tier one automotive manufacturer. Our primary role was to oversee, review and validate that the physical and logical designs met technical and business requirements prior to deployment. These two virtually identical DCs were each over 50,000 square feet, not including the administration and utility areas.
When the deployment program was completed, we started working on a new program initiative to develop private-cloud-type services that would overlay and synchronize the operations of the two DCs, with far-reaching goals to automate services in order to reduce repetitive tasks and human error and to streamline operational tasks.
In order to move toward an effective hybrid cloud strategy, it's important to clearly understand the proper distribution of services/features across the public cloud and on-premises corporate DCs. Laying out a good service catalog that identifies the details of what is needed for the day-to-day operation of your hybrid cloud environment is crucial; AWS Service Catalog is a very useful tool for this. Once it is determined which services should move to the cloud and which should stay on premises, there are many AWS services available to help attain your goals. Following are a few key services used for migration to the cloud:
· AWS Direct Connect – enables your organization to establish a dedicated network connection from on-premises locations to AWS, with private virtual interfaces (VIFs) into your Amazon VPC (Virtual Private Cloud).
· AWS DMS (Database Migration Service) – provides quick, secure database migration into your Amazon VPC. Your source database can remain operational while the migration is underway.
· AWS Storage Gateway – connects on-premises workloads to cloud storage for backup/restore and disaster recovery in a hybrid environment.
· AWS IAM – grants employees and applications federated access to the AWS Management Console and AWS service APIs, using existing identity systems such as Microsoft Active Directory.
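As a brief sketch of how these building blocks can be inventoried programmatically, the following Python/boto3 snippet lists existing Direct Connect connections, DMS replication tasks and Storage Gateway appliances in a hybrid environment. It assumes AWS credentials and a region are already configured, is read-only, and uses illustrative defaults; field names reflect the current boto3 APIs.

```python
# Minimal read-only sketch: inventory hybrid-cloud building blocks with boto3.
# Assumes AWS credentials/region are configured (e.g. via environment or profile).
import boto3

def hybrid_inventory(region="us-east-1"):          # region is an assumption
    dx = boto3.client("directconnect", region_name=region)
    dms = boto3.client("dms", region_name=region)
    sgw = boto3.client("storagegateway", region_name=region)

    print("Direct Connect connections:")
    for conn in dx.describe_connections().get("connections", []):
        print(f"  {conn.get('connectionName')}  state={conn.get('connectionState')}")

    print("DMS replication tasks:")
    for task in dms.describe_replication_tasks().get("ReplicationTasks", []):
        print(f"  {task.get('ReplicationTaskIdentifier')}  status={task.get('Status')}")

    print("Storage Gateways:")
    for gw in sgw.list_gateways().get("Gateways", []):
        print(f"  {gw.get('GatewayName', gw.get('GatewayARN'))}  type={gw.get('GatewayType', 'n/a')}")

if __name__ == "__main__":
    hybrid_inventory()
```

A check like this is a useful starting point for the service-catalog exercise described above, since it shows what hybrid connectivity and migration services are already in place.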
Whatever your DC or hybrid cloud needs might be, The INS Group can set up AWS cloud services to help accomplish your requirements.
Copyright © 2020 Integrated Network Solutions Group - All Rights Reserved.