
Sunday, January 29, 2012

Residual Gas Analyzer

Residual gas analyzers (also known as RGAs) are small mass spectrometers; the term also refers to the measurement technique itself. They determine the chemical composition of a gas and are commonly used in industrial processes to check for contamination. To gauge what comprises a gas, residual gas analyzers ionize its separate components to create charged particles and then determine the resulting mass-to-charge ratios. The process takes place inside a vacuum, where quality is easier to monitor and impurities and inconsistencies are easier to detect because of the low pressure. Setups such as this can be found inside accelerators and scanning microscopes. Both types of RGA share three main components: an ion source, a mass analyzer, and a detector. First, the gas moves through the ion source, which turns molecules into ions. The mass analyzer then sorts the ions according to mass by employing electric and magnetic fields. Lastly, the detector measures each ion's mass-to-charge ratio. The final display of data is referred to as a mass scan or a mass spectrum.

The ionizer turns the gas molecules into ions using electron impact ionization, in which an electron beam ionizes the gas atoms. A hot emission filament creates the beam, but a stray magnetic field can destroy the filament and disrupt the beam. Because reactive gases, such as oxygen, can disrupt the electron flow, RGAs work best at low pressures.

After ionization takes place, the ions are sorted according to mass by a mass analyzer. Although there are numerous ways of analyzing ion mass, RGAs commonly depend on an RF quadrupole, which prevents ions with the incorrect mass-to-charge ratio for the given frequency from passing on to the ion collector. Depending on the sensitivity required, either a Faraday cup or an electron multiplier may be used as the RGA ion detector.

The final data, or RGA mass spectrum, can be displayed as a chart of mass-to-charge ratio versus relative intensity. Prior knowledge of which gas molecules share the same mass, and therefore produce peaks at the same mass-to-charge ratio, is helpful in identifying the gases.
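As a rough illustration of how such a mass scan might be interpreted, the sketch below matches peak positions against a small table of common residual gases. All names, m/z values, and intensities here are illustrative assumptions, not output from a real instrument; note that m/z 28 is ambiguous between N2 and CO, which is exactly the kind of overlap that prior knowledge must resolve.

```python
# Hypothetical sketch: matching RGA mass-spectrum peaks to likely gases.
# The m/z values are standard masses for singly charged ions; the peak
# list is illustrative, not real instrument data.

COMMON_GASES = {
    2: "H2", 18: "H2O", 28: "N2 / CO", 32: "O2", 40: "Ar", 44: "CO2",
}

def identify_peaks(peaks):
    """Map (m/z, relative intensity) pairs to candidate gas species."""
    results = []
    for mz, intensity in peaks:
        species = COMMON_GASES.get(round(mz), "unknown")
        results.append((round(mz), species, intensity))
    return results

scan = [(18.0, 0.65), (28.1, 1.00), (32.0, 0.21), (44.0, 0.03)]
for mz, species, rel in identify_peaks(scan):
    print(f"m/z {mz:3d}  {species:8s} relative intensity {rel:.2f}")
```

A real analysis would also weigh fragmentation patterns and isotope ratios rather than a simple nearest-mass lookup.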

When it comes to ionization, there are currently two types of RGA systems that work well: open source RGAs and closed source RGAs.

Open Ion Source RGAs

It is not uncommon for vacuum systems used in production to operate at two distinct pressure ranges. Base pressure is often used to clean the vacuum chamber and its components. Process pressure occurs at a higher range, when specific gases are added for a given process. If the base pressure is less than 1E-4 Torr (a Torr is a unit of pressure equal to approximately 1/760 of an atmosphere), an open ion source RGA ionizer can be used. Because open ion source RGAs can only handle a maximum pressure of 1E-4 Torr, and base pressure tends to fall below this figure, they can usually be attached directly to the vacuum chamber. They measure the gas present without changing the gas composition or altering the vacuum environment.

Closed Ion Source RGAs

If pressure is between 1E-4 and 1E-3 Torr, a closed ion source RGA can be used, as its design limits the amount of process gas reaching the analyzer. A closed ion source RGA, which is a small ionizer, attaches to a quadrupole filter and has a tube with two openings: one for the electrons to enter and one for the electrons to exit. Alumina rings seal the tube, and the majority of the quadrupole is composed of electrodes. As soon as the process begins and electron contact is initiated, the ions are formed. The actual ionization occurs at the same level of pressure as the process, meaning the pressure is the same in the rest of the chamber as it is where ionization occurs. However, the rest of the mass analyzer is kept under high vacuum. Generally speaking, closed ion source RGAs operate between 1E-2 and 1E-11 Torr.

Saturday, January 28, 2012

Magnetic Liquid Level Gauges

A broad range of instruments are used for monitoring and maintaining levels within industrial fluid systems. Through the principles of visual, ultrasonic, microwave, or electromagnetic detection, liquid level gages can track and measure variations in liquid levels, enabling operators to set controls and keep systems within preferred performance parameters. While glass liquid level gages feature relatively straightforward design and operational characteristics involving direct line-of-sight monitoring and measurement, they can be less effective in applications that include hazardous or toxic fluids due to the potential risk of fracture or leakage in the glass components. Glass designs can also make it difficult to take readings at longer distances. In these cases, magnetic liquid level gages are a useful alternative because they are effective in handling toxic substances as well as providing long-distance indications.

Magnetic level gages do not depend on direct viewing of levels and function without the use of transparent glass components. This allows the measuring chamber to be constructed of opaque, welded metal parts, greatly expanding the operating temperature range and improving ruggedness and durability compared to glass chambers. A magnetic gage’s measuring chamber usually has the same coefficient of thermal expansion as the vessel being measured, allowing readings across a wider range of temperatures, which would be impractical if glass materials were incorporated into the system and allowed to interface with the metal chamber. These magnetic liquid level gages are an effective option for a number of fluid monitoring applications.

Magnetic Floats

For magnetic liquid level gages to perform successfully, the measuring chamber must be constructed of a nonmagnetic material, such as an austenitic stainless steel. A magnetic gage usually relies on a float within the measuring chamber to provide level indications. The float is typically a permanent magnet, and magnetic field detection methods are used to determine the float’s location, which in turn indicates the fluid level in the vessel; the chamber itself therefore cannot be magnetized, as this would interfere with the detection process. A magnetic gage float typically needs a thick wall in order to function at higher pressures. The common methods for determining the location of the float include magnetostrictive transducers, magnet-operated flags, and magnetic followers.

Magnetic Followers

A magnetic follower is a tracking device that is usually mounted to the side of the gage’s measuring chamber. The permanent magnet inside the float lines up with the follower as the float moves up or down with changes in fluid level. The follower’s position and movements are measured against a scale to produce level readings. The strength of the magnetic attraction between the follower and the float sometimes causes a significant degree of friction against the measuring chamber wall. This friction may limit resolution, appearing as a discontinuous motion of the follower as it responds to level variations.

Magnet-Operated Flags

Like magnetic followers, magnet-operated flags physically track the movement of the float as it travels up and down inside the measuring chamber. The magnet within each flag causes it to flip in one direction when the float passes downward and flip in another direction when the float passes upward. Easily differentiated colors, such as red, orange, and yellow, are often used to mark the flags and to provide a color line for important sections of the level scale. Fluorescent colors may also be used to make the readings easier to see with a flashlight. These flags can, however, flip incorrectly due to bobbing of the float or rapid changes in fluid levels. These errors can often be corrected by passing a magnet externally along the flag, while float position can be determined with a compass if necessary.

Magnetostrictive Transducers

A magnetostrictive transducer can be used with a level detection system to provide accurate readings without the limitations presented by glass-based designs, or it can be installed into an existing magnetic liquid level gage to augment the functions of flags or followers. The transducer is a linear device that tracks the position of a magnetic field parallel to the transducer’s own sensing probe. In a magnetic liquid level gage, the float serves as a position magnet to produce the magnetic field. Magnetostrictive transducers are most effective when used in conjunction with a standard nonmagnetic metal tube enclosing a float and magnet system.

The waveguide is the core component in a magnetostrictive transducer. As a current pulse reaches the waveguide circuit, torsional force becomes induced at the point of the position magnet and a timer is activated. This torsional force produces a strain wave that travels through the waveguide until reaching the pickup where it is detected and the timer is stopped. The elapsed time recorded on the timer indicates the position of the magnet. There is usually little or no float friction because the waveguide’s diameter is relatively small, creating no magnetic attraction along the chamber wall. The lack of friction and the transducer’s ability to detect minute positional differences results in highly accurate readings. The measurements can be transmitted locally, remotely, on a standalone indicator, or as computerized input.
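The timing principle above reduces to a short calculation: because the strain wave travels along the waveguide at a fixed, known speed, the magnet's position is simply that speed multiplied by the elapsed time. The wave speed used here is an assumed, illustrative value rather than a datasheet figure.

```python
# Sketch of the time-of-flight calculation for a magnetostrictive
# transducer: elapsed time between the current pulse and the pickup
# signal, times the torsional wave speed, gives the float magnet's
# position along the probe. Wave speed is an illustrative assumption.

WAVE_SPEED_M_PER_S = 2850.0  # assumed order of magnitude for the strain wave

def magnet_position_m(elapsed_time_s):
    """Distance from the pickup to the position magnet."""
    return WAVE_SPEED_M_PER_S * elapsed_time_s

# A 100-microsecond delay corresponds to roughly 0.285 m of travel:
print(magnet_position_m(100e-6))
```

Because only a timer and a fixed constant are involved, resolution is limited mainly by how finely the elapsed time can be measured.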

Thursday, January 26, 2012

Magnetic Flow Meter Principle

The purpose of a flowmeter system is to measure the movement, or flow rate, of a given volume of fluid and to express it through an unambiguous electrical signal. A standard flowmeter consists of a series of linked components that transmit signals indicating the flow rate or total volume of fluid moving through a specific channel, and it ideally functions with minimal interference from environmental conditions. A magnetic flowmeter is a relatively noninvasive measuring device that is well-suited for flow rate analysis due to its straightforward range of functions.

A magnetic or electromagnetic flowmeter can be installed in a comparatively simple fashion insofar as an existing pipe network can be converted into a measurement system by applying external electrodes and magnets. These flowmeters can track forward and reverse flow and are minimally affected by flow disturbances related to viscosity or density. They are linear devices that can be calibrated to measure a range of different variables while also reacting to changes in fluid movement. Progress in flowmeter technology has focused on producing devices that are smaller, less expensive, and capable of making more refined measurements.

Faraday’s Law

Like many other electrical devices, magnetic flowmeters function under the principles of Faraday’s law of electromagnetic induction. According to this law, a conductor that passes through a magnetic field produces voltage proportional to the relative velocity between the magnetic field and the conductor. The law can be applied to flowmeter systems because many fluids are conductive to a certain degree. The amount of voltage they generate as they move through a passage can be transmitted as a signal measuring quantity or flow characteristics.

The functional range for a flowmeter system is based on the movement of a conductor perpendicular to a magnetic field. For example, as a conductor of a certain length moves through a magnetic field with a specific flux density, it remains perpendicular to the field along the X, Y, and Z axes, producing a voltage across both ends of the conductor. This voltage equals the conductor length multiplied by the field’s flux density and the conductor’s velocity. Faraday’s law extends to flow measurement because the conductor length in a fluid will equal the inside diameter of the flowmeter itself, and the basic formulas of electromagnetic induction can thus be applied to liquid flow rates.
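As a minimal sketch of this relationship, with illustrative values assumed for the flux density and pipe diameter, the induced voltage can be inverted to recover the fluid velocity and then the volumetric flow rate:

```python
import math

# Sketch of Faraday's law applied to a magnetic flowmeter:
# induced voltage E = B (flux density) * D (pipe inside diameter) * v.
# All numbers are illustrative, not taken from a real meter.

def flow_rate_m3_per_s(voltage_v, flux_density_t, pipe_diameter_m):
    velocity = voltage_v / (flux_density_t * pipe_diameter_m)  # v = E / (B * D)
    area = math.pi * (pipe_diameter_m / 2) ** 2                # pipe cross-section
    return velocity * area

# 1 mV of induced signal across a 0.1 m pipe in a 0.01 T field:
q = flow_rate_m3_per_s(1e-3, 0.01, 0.1)
print(f"{q:.5f} m^3/s")
```

With B and D fixed by the installation, the voltage-to-flow conversion is linear, which is why these meters are straightforward to calibrate.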

For more information on Faraday’s law, please visit HyperPhysics.

Velocity and Voltage

When a flowmeter is installed and activated, its operations begin with a pair of charged magnetic coils. As energy passes through the coils, they produce a magnetic field that remains perpendicular to both the conductive fluid being measured and the axis of the electrodes taking measurements. The fluid moves along the longitudinal axis of the flowmeter, so any induced voltage is generated perpendicular to both the field and the fluid velocity. An increase in the flow rate of the conductive fluid will create a proportionate increase in the voltage level.

Flow Profiles

Fluid movement within a flowmeter system can be characterized as square, with a turbulent fluid velocity; distorted, with weak upstream flow; or parabolic, with a laminar velocity. But regardless of the profile, a magnetic flowmeter will provide the average voltage from a metering cross-section, so that the signal transmitted to operators tends to closely reflect the average velocity of the flowing liquid. Given a fixed pipe diameter and a constant magnetic field, induced voltage will correlate only with fluid velocity. If sensing electrodes are connected to a circuit, the voltage will drive a current that can be translated into an accurate flow rate measurement.

Although flowmeters are designed to provide as close of a linear connection between voltage and flow as possible, there are numerous factors which may disrupt this relationship. Possible sources of interference include:

Unintended extra voltage in the processing liquid.
Electromechanical voltage accidentally induced in the electrodes or the fluid.
Capacitive coupling between the signal circuit and the power source.
Inductive coupling between the magnetic components in the system.
Capacitive coupling between connective leads.

These and similar sources of external voltage or noise can disrupt normal flow measurement, so it may be worthwhile to set up a flowmeter under conditions as carefully controlled as possible.

Wednesday, January 25, 2012

Load Cell Basics

A load cell is a type of transducer that converts physical force into a measurable, quantifiable electrical signal. Because different pieces of machinery require different load cells, there are many configurations, but the most popular, and the focus of this article, is the strain gauge variety. This device measures strain and converts that force into an electrical signal, which manifests as a measurement for workers and scientists. Measuring strain effects helps preserve the integrity of the unit under pressure and protects nearby equipment and people.

How Load Cells Work

Load cells operate on one of three principles: hydraulic, pneumatic, or strain gauge. Strain gauge load cells are attached to a structural bearing or support beam of an application that endures stresses and pressures, often with superglue or another appropriate adhesive. When strain is put upon the bearing, the change in tension of the material exerts force upon the strain gauge load cell, which sends an electronic signal through a switching unit. This signal manifests as a measurement of the load and reveals how much tension is being placed upon the unit.

Load cell display units also come with a variety of features. Contemporary displays are in digital format, and display tension forces as well as temperature, voltage to frequency comparisons and other important information about the application. The measurement is calculated by a complex equation based on the reaction of four different measurements of stress and compression. The display reads out numbers so that someone monitoring the application can determine if the stress is appropriate.
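The four-measurement arrangement mentioned above is typically a Wheatstone bridge of strain gauges. As a hedged sketch, assuming a linear full-bridge cell with an illustrative rated output and capacity (real values come from the cell's calibration certificate), a display unit's conversion might look like this:

```python
# Hypothetical sketch of converting a full-bridge strain gauge signal
# into a load reading. Load cells are characterized by a rated output
# (mV of signal per V of excitation at full capacity); the figures
# below are illustrative assumptions, not real calibration data.

RATED_OUTPUT_MV_PER_V = 2.0   # assumed sensitivity at full load
CAPACITY_KG = 500.0           # assumed full-scale capacity

def load_kg(signal_mv, excitation_v):
    """Convert bridge signal to load, assuming a linear cell."""
    mv_per_v = signal_mv / excitation_v
    return (mv_per_v / RATED_OUTPUT_MV_PER_V) * CAPACITY_KG

# 5 mV of signal on 10 V excitation is a quarter of rated output:
print(load_kg(5.0, 10.0))  # 125.0
```

The bridge arrangement is what cancels the temperature and bending effects; the display then only needs this linear scaling.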

Load Cell Applications

Load cells are necessary for many load-bearing applications, both to maintain structural integrity and, in so doing, to ensure the safety of people and surroundings. Large buildings, which sway in wind and contract and expand with the seasons, are huge, pressure-dependent structures housing hundreds to thousands of people, and under unsafe conditions, accidents can happen. Most buildings are designed to withstand impacts and natural disasters, and load cell strain gauges are set in place to monitor these conditions. For instance, brick structures, which are composed of interlaced building materials, require load cell strain gauges to detect whether anything has shifted enough to pose a hazard. Although innovative technology has decreased the likelihood of this happening, the ruins of any old castle or wall show the types of dangers brick structures can pose. In large skyscrapers, the many support beams and structural components often use load cells for similar reasons. Your office building probably contains many such units to keep the building under observation.

Other load-bearing applications include freight vehicles and docking locations, which must sustain incredibly heavy loads on a day-to-day basis. The load cell might be monitored less in a vehicle, which is on the go and not subject to constant analysis, but fixed-place applications like docks undergo frequent status checks. Virtually any structure of a similar type needs to be monitored to keep it on an even keel.

Monday, January 23, 2012

Hydraulic Pressure Switch

Pressure switches are devices that convert pressure changes into electrical or linear energy based on transducer activation. They can turn on due to a pressure decrease or increase, depending on the application. The two main types of pressure switches are triggered by hydraulic or pneumatic pressure, and both are available in contact or non-contact versions. Contact means the unit is physically placed within the fluid container to gauge the pressure, while non-contact means the unit measures the pressure from outside the container using other types of sensors. Hydraulic pressure switches are often used in automobiles to alert drivers of the vehicle’s fuel levels, but there are many other applications that utilize the pressure switch.

How Hydraulic Pressure Switches Work

Pressure switches are typically contact switches, meaning they fit into a container of liquid to measure pressure. Because most units are very small, the displacement of the unit is factored into the measurement. The unit is composed of two sections—the transducer unit and the switch unit. The transducer is the piece of equipment that measures the pressure in the container, and can be set to identify ascending pressure, descending pressure, or, in some models, multiple pressure points. When the set pressure is met, the transducer sends a signal to the switch unit, which converts that message into electrical energy. This energy triggers the next step in the application. In a fuel gauge, for instance, this electrical energy can be used to activate a warning light if the pressure gets too low. In rocketry, scientists can monitor pressure to prevent explosions. In petroleum mining, workers can learn if they are in danger of hitting an air pocket and creating an explosion.
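The transducer-and-switch behavior described above can be sketched as a simple state machine. The setpoint and deadband figures are illustrative assumptions; a real switch implements this logic mechanically or electronically.

```python
# Minimal sketch of a pressure switch that trips on descending pressure
# (e.g. a low-pressure warning light) and resets only after pressure
# rises back above a deadband. All values are illustrative.

class PressureSwitch:
    def __init__(self, setpoint_psi, deadband_psi):
        self.setpoint = setpoint_psi
        self.deadband = deadband_psi
        self.tripped = False

    def update(self, pressure_psi):
        if pressure_psi <= self.setpoint:
            self.tripped = True            # warning light on
        elif pressure_psi >= self.setpoint + self.deadband:
            self.tripped = False           # reset above the deadband
        return self.tripped

low_pressure = PressureSwitch(setpoint_psi=10.0, deadband_psi=2.0)
print(low_pressure.update(9.5))    # True: fell below the setpoint
print(low_pressure.update(11.0))   # True: still inside the deadband
print(low_pressure.update(12.5))   # False: reset
```

The deadband prevents the switch from chattering when the pressure hovers near the setpoint, which matters for the warning-light and safety uses described above.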

Pressure switches are available with different settings and range functionality, so it is important to select the correct switch for an application. Some liquids are more viscous, and this can confuse certain basic switches if they are not properly attuned to the liquid type. However, most hydraulic pressure switches can handle the majority of hydraulic fluids. Hydraulic pressure switches are also designed to handle different pressure levels. A typical hydraulic pressure switch operates between 300 and 2600 PSI, although higher ratings of 4000 PSI and even up to 12000 PSI are available at greater cost. Average temperature ranges are between -20 degrees Fahrenheit and 160 degrees Fahrenheit.

Hydraulic Pressure Switch Applications

Automobiles feature many hydraulic pressure switches, as does any piece of equipment or vehicle that relies on hydraulic fluids, including aerospace devices and many other motorized vehicles. The applications can be as simple as activating a brake light after detecting a rise in pressure in the brake pipes. In airplanes, these signals can be very important to the pilot, who needs to constantly monitor the hydraulic fluids that operate plane flaps, lights, landing gear, and various types of coolant throughout the craft. Rocket propulsion operates on directional pressure release, so these types of switches are very important in launching satellites and spacecraft. These situations are much more volatile than simple brake light activation, as improper pressure monitoring can result in explosions.

Sunday, January 22, 2012

Common Types of Limit Switches

As its name suggests, a limit switch regulates the operations of machines that are equipped with moving parts connected to a switching action mechanism. A wide range of industrial machinery uses limit switches to control the movement of devices performing on a production line, but these switches are also found in non-industrial applications, such as electric motor operation and garage door opener units. In the case of a garage door opener, a limit switch is responsible for turning off the motor that lifts the door before the door crashes into the lifting mechanism. The switch also deactivates the motor as the door closes, preventing it from being pushed into the ground. Limit switches enable this and similar operations to work as semiautomatic processes by regulating an initiated action to keep it within performance parameters.

When installed in a machine system, a limit switch can usually start, stop, slow down, or accelerate operations, as well as activate a forward or reverse process. In order to perform these actions, limit switches are designed in a variety of shapes, sizes, and capacity ranges to accommodate differences in machine systems and production processes. A limit switch is typically composed of a series of electrical contacts coupled to an actuator that controls the mechanical device responsible for on and off functions. Limit switch instruments are employed in a broad range of applications due to their straightforward design, relatively simple installation requirements, reliability, and resilience in withstanding environmental conditions.

Types of Limit Switches

Limit switch performance depends on a number of factors. In addition to the operational parameters and mechanical specifications of a machine, these factors include the size, mounting method, and force capacity of the switch, as well as the stroke rate involved in the operating process. It is important for a limit switch’s electrical rating to match that of the system into which it is installed in order to reduce the potential for instrument failure and ensure proper functioning. The common types of limit switches used in industrial applications include:

Heavy-Duty Precision Oil-Tight: Also known as the Type C limit switch, this device is highly reliable due to its long electrical and mechanical lifespan. It features a straightforward wiring arrangement and relatively easy installation. The Type C can be equipped with a range of different head and body styles, including a more durable design that is watertight and submersible. It is available in a standard format, as well as with specialized reed contacts.

Heavy-Duty Oil-Tight and Foundry: When load requirements exceed the capacity range for a precision oil-tight switch, a regular heavy-duty oil-tight model, or Type T, may be needed. It can handle operating sequences unavailable on the Type C and can withstand high trip and reset forces. The heavy-duty foundry limit switch, or Type FT, is commonly used in foundries and mills where Type T operating conditions are coupled with elevated temperatures and foreign materials that may jam other types of switches.

Miniature Enclosed Reed: This limit switch, also known as Type XA, is a smaller and less expensive device formed from die-cast zinc. It contains a contact array featuring a hermetically sealed reed, which makes it well-suited for applications that require a high level of contact reliability or involve environmental stresses. The switch is normally prewired and can be placed in smaller or harder to reach areas.

Gravity Return: The gravity return limit switch is usually employed in production line and conveyor operations involving small, lightweight components. This type of switch relies on gravity to reset its contact switches by exerting force on a lever arm and typically functions with a low level of torque. There are several varieties of gravity return switches, including spring return, roller type, lever type, top push, and maintained contact designs.

Snap Switches: A snap switch is designed to instantly trigger as soon as the mechanism attached to the switch has moved a predetermined distance, regardless of the speed at which the moving part travels. Snap switches are commonly used in applications that require only basic contact parameters and can work with or without an operator. They are effective in machine systems that feature short movements or a slow rate of operation.

Limit Switch Circuitry

To better understand the way a limit switch circuit operates, it may help to look at an example that illustrates contact switching principles. A limit switch with a single-station, maintained contact design will have a “Start” button that mechanically controls the contacts. Pressing the “Start” button causes the mechanism to maintain the contact sequence that closes the circuit, while pressing the “Stop” button will open the contacts and break the circuit. If a system malfunction, such as a power failure or overload, deactivates the switch device, the contacts remain unaffected and the motor will automatically restart when normal conditions return.
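A minimal sketch of this maintained contact behavior, purely for illustration: the contact state latches independently of power, so a restored supply finds the circuit exactly as it was left.

```python
# Illustrative model of a single-station, maintained contact circuit:
# "Start" latches the contacts closed, "Stop" opens them, and a power
# failure leaves the contact state untouched, so the motor restarts
# automatically when power returns. Not a real control-circuit model.

class MaintainedContactCircuit:
    def __init__(self):
        self.closed = False   # contact state, retained through outages
        self.powered = True

    def press_start(self):
        self.closed = True

    def press_stop(self):
        self.closed = False

    def motor_running(self):
        return self.powered and self.closed

circuit = MaintainedContactCircuit()
circuit.press_start()
circuit.powered = False          # power failure: contacts unaffected
print(circuit.motor_running())   # False while the power is out
circuit.powered = True
print(circuit.motor_running())   # True again: motor restarts on its own
```

This automatic restart is exactly why maintained contact designs are chosen for some processes and deliberately avoided for others where an unexpected restart would be hazardous.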

For illustrations and information on the symbols used for designating contact sequences, visit Fundamentals of Electrical Engineering and Electronics.

Saturday, January 21, 2012

Touchscreen

Touchscreens are a popular, innovative technology found in various entertainment, communications, and customer service devices. They allow users to control devices and navigate interfaces without the aid of instruments such as physical mice and keyboards. Touchscreen interaction works by pointing and gesturing with either a finger or a stylus. While the technology was heavily developed in the past two decades, touchscreens first emerged in the 1960s.

Some touchscreen interaction methods require stylus use, while others can be operated simply by the human hand. These methods rely on different grid controls that pinpoint cursor location based on resistive touch, heat sensors, acoustic disturbance, and electric conductance. Because of certain ergonomic concerns related to user comfort and capabilities, there are general principles for designing and implementing touchscreen technology in certain non-mobile applications.

The Basics of Touchscreen Operation

There are three basic methods of touchscreen operation, each with several variations.

Resistive

A resistive touchscreen involves an electrical current that is disrupted by touch from both inorganic and organic instruments. There are multiple layers around this current: two metallic layers, one conductive and one resistive, which are separated by a very small space through which the current flows. The metallic layers sit above a pane of glass and below a scratch-resistant layer. When an instrument such as a stylus or a finger touches the top layer, the slight pressure causes the metallic layers to connect. The computer elements of the touchscreen device can then calculate the precise location of the current disruption, allowing interface operation.
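As a hedged sketch of that final location step: a typical 4-wire resistive controller drives one layer as a voltage divider and reads the other, so the raw ADC value is proportional to the touch coordinate. The ADC resolution and screen dimensions below are illustrative assumptions.

```python
# Illustrative sketch of scaling raw resistive-touch ADC readings to
# pixel coordinates. Real controllers add calibration and debouncing;
# the resolution and screen size here are assumed values.

ADC_MAX = 4095                  # assumed 12-bit reading
WIDTH_PX, HEIGHT_PX = 480, 272  # assumed display resolution

def touch_position(adc_x, adc_y):
    """Scale raw voltage-divider readings to pixel coordinates."""
    x = adc_x / ADC_MAX * WIDTH_PX
    y = adc_y / ADC_MAX * HEIGHT_PX
    return round(x), round(y)

print(touch_position(2048, 1024))  # roughly mid-screen horizontally
```

In practice a short calibration routine maps a few known touch points to corners of the display to correct for panel-to-panel variation.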

Surface Acoustic Wave

Surface acoustic wave technologies transmit ultrasonic acoustic waves across a layer of reflectors. When a finger touches the screen, the waves are disrupted and the computer can calculate the touch location. Surface acoustic wave screens generally offer some of the best image quality among touchscreen technologies, because they use no metallic layers and allow nearly 100 percent light transmission.

Capacitive

Capacitive touchscreens incorporate electric charges beneath a charge-storing glass panel. When a conductive instrument touches the panel, the charge is directed by chips beneath the panel that determine the touch location. Capacitive touchscreens also provide good image clarity, because of the tendency to use glass parts.

Information about further distinctions between these touchscreen methods and their variations can be found at http://www.touchscreens.com/intro-touchtypes.html.

Other Touchscreen Information and Considerations

In addition to touchscreen type, there are ergonomic concerns to consider for touchscreen users. Because touchscreens are common features of mobile devices, arm strain is not typically a concern, but applying touchscreen functionality to a mounted device can cause problems. Mouse and keyboard use for a typical desktop or laptop computer relies on the user’s arms resting horizontally on the table, but a touchscreen requires the user to extend his or her arms and hold them aloft for longer periods of time. This can cause strain and exhaustion, and result in the user choosing a different device.

Additionally, touchscreens rely on various types of physical input. Because capacitive touchscreens operate via conductive touch, typically a user’s fingers, the screen can be dirtied by fingerprints. Most capacitive touchscreens have incorporated oleophobic coatings, which are chemicals that resist adherence to oils, specifically oils common on human skin. For other types of touchscreens that rely on a stylus or other inanimate tool for interaction, it is necessary to find scratch-resistant glass or coating for the upper layer of the device, so as to prevent dents and discoloration on the touchscreen.

Touchscreens are common on portable devices, such as cellular telephones, digital music devices, and handheld organizers, some of which are intended for use in rugged environments. Touchscreens rely on sensitive working properties, so extreme environments can have adverse effects on their behavior and performance. For example, touchscreens intended for cold environments should probably not be capacitive, because a user will probably be wearing protective gloves and will not be able to properly physically contact the screen.

Thursday, January 19, 2012

Measuring the Dew Point

Environmental conditions can determine the effectiveness of many industrial applications, which is why monitoring the dew point is essential in many trades. The dew point, defined as the temperature at which air moisture begins to condense, is a significant factor in HVAC (heating, ventilation, and air conditioning) technologies. It is also important in predicting corrosion in metals and in numerous chemical manufacturing processes. Because of its vital role in so many processes, precise dew point measurement tools have become a fundamental utility in everyday industrial functions.
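For reference, the dew point defined above can be estimated from air temperature and relative humidity with the well-known Magnus approximation; the coefficients below are commonly published values, and the result is accurate to within roughly half a degree over ordinary ambient conditions.

```python
import math

# Magnus approximation for dew point from temperature and relative
# humidity. Coefficients are standard published values (over water);
# accuracy is roughly +/- 0.4 C across normal ambient ranges.

def dew_point_c(temp_c, rel_humidity_pct):
    a, b = 17.625, 243.04  # Magnus coefficients
    gamma = math.log(rel_humidity_pct / 100.0) + a * temp_c / (b + temp_c)
    return b * gamma / (a - gamma)

# At 25 C and 60% relative humidity, condensation begins near 16.7 C:
print(round(dew_point_c(25.0, 60.0), 1))
```

Instruments like the chilled mirror hygrometers discussed below measure the dew point directly rather than estimating it, which is why they serve as transfer standards.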

Effective Dew Point Tools: Hygrometers

Chilled mirror hygrometers have generally been the conventional tool for precise dew point measurement, and the device is considered a humidity transfer standard. The process entails cooling a mirror until water vapor begins to condense on its surface; the mirror temperature at that moment gives the dew point of the air. This process is generally used in laboratory practice and for monitoring the environments of storage venues. In addition to its use in material production (paint and glass manufacturing, for instance), the system is also effective in dry food processing.

While the mirror system is widely considered to be the most effective measurement process, its drawback is a tendency to become contaminated. Because the apparatus is sensitive, the device must be cleaned regularly to ensure consistent results, which can make it costly to maintain. Inspection and maintenance can be performed with a mirror microscope, and typically the sensor can be opened manually via attached springs. Modern chilled mirror hygrometers have developed well beyond the first manual models and include more elaborate features, such as “self-checking” functions that allow the device to detect and react to contaminants. These devices are also available in digital formats and allow for wireless readings. An electronic mechanism lets the device balance condensation and evaporation on the mirror surface.

Lithium Chloride Sensors

Lithium chloride sensors are used because of their high reliability and relatively simple construction. They have an advantage over other electrical humidity devices in that they are not easily contaminated, and they may be cleaned with ammonia and recharged with lithium chloride solution. Industrial uses of this device include measurements for dryers and refrigeration controls. In these applications, each sensor is composed of a metal tube saturated with lithium chloride solution and wound with wires that are connected to a power source. Generally these sensors are used in industrial applications that require moderate accuracy.

Aluminum Oxide Hygrometers

Another common dew point measurement method uses metal oxide devices, also known as aluminum oxide technology. These devices are typically designed for low dew point measurements. They are generally small and can often be mounted on walls or ducts in industrial settings. These sensors are effective in a wide array of industrial uses, as their multiple-sensor configurations allow for a broad measurement range. Metal oxide hygrometers are typically less accurate than chilled-mirror devices and are not considered efficient for long-term use. The sensors are susceptible to environmental factors and can be easily destroyed if exposed to damp conditions. Because of this sensitivity, regular assessment and recalibration of the tool (often by the manufacturer) is necessary.

Polymer Sensors

Polymer sensors have long been used to measure the dew point and are also effective across a wide humidity range. Typically applied in the power and petrochemical industries, these sensors are generally used for low dew point applications. Their notable advantages are long-term stability and suitability for processes that require minimal maintenance.

Wednesday, January 18, 2012

Specifying Common Igniters

Igniters are devices or assemblies that produce a specific level of heat in order to initiate a larger combustion reaction. Within industrial applications, igniters are manufactured for various engine and burner systems, including process heaters and high pressure washers. They are produced in simple and complex designs according to the application. This guide distinguishes the characteristics, functions and common issues associated with several of the most common industrial igniters, including pyrotechnic, hot surface and spark (or electrode) devices.

Pyrotechnic Igniters

Pyrotechnic igniters are frequently controlled electrically and are used to ignite materials that generally have complex ignition requirements. Thermites are a pyrotechnic mixture of metal powder and metal oxide that undergoes what is called a thermite reaction. While this reaction is not typically explosive, it can produce rapid bursts of high temperature under the right conditions, generally concentrated on a very small area for a short period of time.
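The best-known thermite is the iron oxide/aluminum mixture; its overall reaction is the standard textbook equation below (not specific to any particular igniter), with a heat of reaction of roughly −850 kJ per mole of iron oxide, which is what makes the burst of localized high temperature possible.

```latex
\mathrm{Fe_2O_3} + 2\,\mathrm{Al} \;\longrightarrow\; 2\,\mathrm{Fe} + \mathrm{Al_2O_3},
\qquad \Delta H \approx -850\ \mathrm{kJ\,mol^{-1}}
```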

Additional Considerations

These devices may require maintenance to adhere to safety standards, which should be verified through the manufacturer. In some applications they can be demanding to operate, as they require installation for individual engine tests.

Hot Surface Igniters

Invented in 1969, these igniters are composed of advanced ceramic materials and are the most commonly used electronic ignition systems today. They are generally employed in applications such as furnaces and space heaters, and are chosen for their reliability and durability.

Hot Surface Igniter Configuration

The two composition materials generally associated with hot surface igniters are silicon carbide and silicon nitride.

Silicon carbide is a compound of carbon and silicon characterized by low density and oxidation resistance. In igniters, this compound offers good high-temperature strength.

Silicon nitride is a chemical compound of silicon and nitrogen. It is a hard, high-strength ceramic that remains durable over a broad temperature range.

Additional Considerations

Because these igniters are made of ceramics, they are considered durable and thermally robust and may last from three to five years. However, they can gradually weaken with time and use, eventually generating less heat than their full potential; they should be replaced when this occurs. Hot surface igniters may also experience premature burnout.

Spark Igniters

Spark igniters, also known as flame igniters depending on their application, are generally considered efficient devices because they are easy and safe to handle: they are electric, so no gas leaks are involved. Spark igniters ignite compressed fuels such as aerosol gas, liquefied petroleum gas and ethanol. Some manufacturers produce spark igniters (also called spark plugs) that produce an ultra thrust ignition, which provides reduced emissions and a faster start.

A spark plug may be considered either hot or cold. The difference is that hot spark plugs hold more heat in the tip, while cold spark plugs conduct heat away from the tip, lowering its temperature. Spark igniters include a subcategory called chatterboxes.

Spark Igniter Configuration

Chatterboxes are considered the least sophisticated of the spark-igniter systems. Various manufactured chatterbox devices are self-cleaning. Spark igniters of this type are capable of igniting more than one burner at a time, and they can be controlled by an on/off switch. The spark is produced at a set of make-and-break contacts, which are made of tungsten for extended durability.

Tungsten is a steel-gray metal distinguished by its robust physical properties. It has the highest melting point of all metals in its pure form, and is often utilized in rocket engine and vehicle applications.

Additional Considerations

Sometimes a spark igniter will fail to ignite: a certain energy level must be maintained or the spark will dissipate. Manufacturers of these igniters suggest inspecting the color of the igniter tip (which should appear light brown) to ensure proper function. Changes in color and deformations may signal contamination or chipping, which can lead to misfire. To help prevent a malfunction, tools such as spark plug reading viewers are available.

Sunday, January 8, 2012

Computers in Automated Test Equipment

A computer system can serve as a useful tool for identifying and managing problems that may arise in electronic equipment. Through the use of add-on interface cards and specialized software, a wide variety of testing methods can be applied and numerous instruments can be attached to a computer in order to perform automated testing. Functional and in-circuit are the two most common types of standalone automated testing instruments. Functional testing assesses a device to determine its faults and is an effective method for evaluating printed circuit boards and subassemblies. It provides relatively rapid qualification checks and is accurate in detecting dynamic faults. Functional testing can successfully perform high-volume testing on printed subassemblies, but may require advanced knowledge of a device in order to set appropriate test patterns and programming.

By contrast, in-circuit testing is more of a diagnostic operation used to verify the quality of individual components in a subassembly. It operates by evaluating each component and detecting any failed parts or flaws. While this method offers detailed testing results, it may not always perform at the clock rate of the subassembly being tested and certain abnormalities, such as race conditions and propagation delays, may go unidentified. Both functional and in-circuit automated testing equipment can provide useful information in component quality-control processes and for evaluating performance parameters for test devices. When coordinated with computer systems, these testing methods can have their efficiency further improved.

Accessing Printed Circuit Boards

There are two major ways for automated test equipment to gain access to vital points on a printed circuit board. In the bed of nails method, a subassembly is mounted onto a dedicated testing apparatus using a vacuum or mechanical attachment process. Individual components on the board are examined with probes sent through essential electrical traces. Through a back-driving technique, input from each component is isolated from related circuitry and the component functions are tested. The process is quick but can be relatively expensive, making it better-suited for high-volume subassembly testing and maintenance.

An alternative method involves the use of printed circuit board clips in place of a dedicated testing apparatus. In this process, an operator mounts a series of clips onto specific locations on the board in order to check device functionality. Normally, it is not necessary to maintain simultaneous access to all components on the board, so the testing system relies on less intensive hardware and programming requirements. Some clip systems also include software packages that enable them to test multiple classes of equipment. Testing time for a standard printed circuit board clip may take up to twenty minutes, which is significantly slower than the operation rate for the bed of nails method, and the clip technique is usually more effective in low-volume applications.

Computer Interface Systems

In automated testing, there are several types of interface systems that can be used to connect computers to testing instruments. The general purpose interface bus (GPIB) and the standard serial interface (RS-232) are common nonproprietary interfacing systems that share numerous characteristics while also providing their own distinct benefits. GPIB and RS-232 can both connect multiple measuring instruments to a computer and both are bidirectional, allowing computers to send and receive data from external components. However, the GPIB interface is more frequently used among test instruments, with thousands of different instruments relying on this format.

RS-232 interfaces, by contrast, are more commonly found in computer applications, as well as printers, scanners, and modems. Remote sensing instruments, such as thermometers and frequency signal strength meters, are the most common types of testing instruments that use RS-232 interfaces instead of the GPIB alternative. Each interface format works differently depending on the application. RS-232 is often found already installed in personal computers, making it a popular choice for automation systems, while GPIB is more often seen in test equipment.

Automated Testing Software

Automated testing systems are often equipped with software packages that enable customization of instrument performance to meet application parameters. An operator inputs a series of codes for each instrument required for the testing process, sets the types of measurements that need to be taken, and determines how the testing results will be stored. After the configuration settings have been established, the software program compiles the processing codes for each instrument in order to initiate performance. This level of automatic programming minimizes the need for programming expertise on the part of the operator, and typically offers a simple set of graphic symbols for the user to manipulate.
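The configure-then-compile workflow described above can be sketched as a small config-driven runner. The instrument names, measurement codes, and readings below are invented for illustration; in a real ATE package these codes would be dispatched over GPIB or RS-232 to physical instruments rather than looked up in a dictionary.

```python
# Minimal sketch of config-driven automated test software. All
# instrument names and measurement codes here are hypothetical.

def run_test_plan(plan, read_instrument):
    """Compile a test plan into (instrument, code) steps, execute each
    step via the supplied read function, and store labeled results."""
    results = {}
    for step in plan:
        code = f"{step['instrument']}:{step['measurement']}"
        results[step['label']] = read_instrument(code)
    return results

# Stand-in for real instrument I/O, so the sketch runs anywhere.
fake_readings = {"DMM1:VOLT_DC": 4.98, "CTR1:FREQ": 1.0e6}

plan = [
    {"label": "rail_voltage", "instrument": "DMM1", "measurement": "VOLT_DC"},
    {"label": "clock_freq",   "instrument": "CTR1", "measurement": "FREQ"},
]
results = run_test_plan(plan, fake_readings.get)
print(results)
```

The separation between the plan (data the operator edits) and the runner (code that never changes) is what lets such software offer a graphical front end and spare the operator from programming.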

Computer-Automated Testing Applications

Typical applications for computer-controlled automated test equipment include product qualification, data gathering, and troubleshooting, all of which rely on software that automatically controls the instruments. Data acquisition can sometimes be accomplished using only a computer and a single instrument. A computer can be designated to take numerous readings until a specific event occurs, such as a rise or drop in voltage that exceeds a preset limit or the elapsing of a preset time limit for the test. The computer then stores the readings from various testing points on the device. An automated system can also set its instruments to perform tasks that would be impractical or impossible to conduct manually while controlling the power supplies, generators, and frequency counters that may be included in the testing equipment.
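The read-until-an-event pattern above can be sketched in a few lines. This is a generic illustration, not any vendor's API: the voltage source is simulated with a ramp, and a sample budget stands in for the preset time limit.

```python
def acquire_until(read_voltage, limit_v, max_samples):
    """Poll a voltage reading until it exceeds a preset limit or the
    sample budget (standing in for a time limit) runs out; return all
    readings captured, including the one that triggered the stop."""
    readings = []
    for _ in range(max_samples):
        v = read_voltage()
        readings.append(v)
        if v > limit_v:
            break
    return readings

# Simulated instrument: voltage ramps up 0.5 V per sample.
ramp = iter(i * 0.5 for i in range(100))
data = acquire_until(lambda: next(ramp), limit_v=3.0, max_samples=50)
print(data)  # stops once a reading exceeds 3.0 V
```

In practice `read_voltage` would query a real instrument over GPIB or RS-232, and the stored readings from the various test points would be written out for later analysis.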

Thursday, January 5, 2012

The History of the Zipper

Manufacturers produce zippers by the billions each year, but the device wasn’t always such a success. In the early stages of development, zippers went through design revisions, unsuccessful marketing attempts and a few name changes. Zippers are abundant today due to the tremendous patience of investors, an engineer who gave the product its crucial final touches, and World War I, when the zipper was mass produced for the first time.

First Zipper Versions

The first semblance of a zipper traces back to Elias Howe, an inventor of the sewing machine. In 1851, he received a patent for a device named an Automatic Continuous Clothing Closure, which had a similar function to the modern zipper, although the composition was significantly different. The product operated as individual clasps that were joined manually and pulled shut using a string, creating a “gathered” effect. Ultimately, Howe did not continue developing his model, and several decades went by before another patent appeared.

More than 40 years later, inventor Whitcomb L. Judson patented the “Clasp Locker or Unlocker for Shoes.” The design was essentially a guide (now known as a fastener or slider) used to close the space between a shoe’s clasps on one side and the attachments on the other. The guide could be removed after use, and had the double function of pushing the bulky clasps down and subsequently pulling them together to close. The guide was difficult to produce because of its very specific functions, and was also seen as time consuming to use.

Judson’s second patent, in 1893, marked a transition from the former bulky clasps to hooks and eyes. This device, later called the “C-curity,” was a series of loops (short metal extensions) that were manually laced into the boot or shoe. The improvement was significant because the device functioned as a unit instead of as individual clasps. Eventually, however, it proved ineffective because it had a tendency to spring open.

Engineer Gideon Sundback ultimately enhanced the previous zipper models by devising a model called the “Plako fastener.” The design featured oval hook units that would protrude from the tape they were attached to, and provided a more secure fit than the previous “C-curity” design. Although the model had a tighter fit, it was not flexible. Also, it did not stay closed when it was bent and posed some of the same problems as the earlier hook design.

The Final Design and Production

In 1913, Sundback revised and introduced a new model, which had interlocking oval scoops (instead of the previously used hooks) that could be joined together tightly by a slider in one movement. This final model is recognized as the modern zipper, though it took many months to find success in the industrial market. Retailers, prone to sticking with traditional materials and design methods, were slow to purchase the product. In the early stages of production, zippers were used exclusively for boots and tobacco pouches. During World War I, military and navy designers acquired zippers for flying suits and money belts, ultimately building the device’s reputation for durability. It was B.F. Goodrich (which used the product for boots and galoshes in the 1920s) that gave the device the name zipper, after the sound, or “zip,” that the slider created.

Originally, manufacturers produced metal zippers, which are effective for heavyweight or thick materials. These metal zippers were made in aluminum, nickel and brass and were eventually incorporated into everyday wear, such as denim. Designers accelerated the success of zippers with additional materials, such as plastic zippers, which are soft, pliable and easy to maintain. Gradually, manufacturers saw the product’s selling ability and versatility, and zippers, now available in a variety of materials and designs such as coils and colored metallics, finally achieved widespread success.
