Explain Norton's Theorem?

Norton's theorem for linear electrical networks, known in Europe as the Mayer–Norton theorem, states that any two-terminal collection of voltage sources, current sources, and resistors is electrically equivalent to an ideal current source, I, in parallel with a single resistor, R. For single-frequency AC systems the theorem also applies to general impedances, not just resistors: the Norton equivalent represents any network of linear sources and impedances, at a given frequency, as an ideal current source in parallel with a single impedance (or a resistor for non-reactive circuits).
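As a concrete illustration, here is a minimal sketch in Python of finding the Norton equivalent of a simple circuit; the component values (Vs, R1, R2) are assumed example values, not taken from any reference. The Norton current is the short-circuit current at the terminals, and the Norton resistance is the resistance seen looking back into the terminals with the internal source deactivated.

```python
# Norton equivalent of a simple example circuit (assumed values):
# a voltage source Vs in series with R1, with R2 connected from the
# output node to ground; the load attaches across R2's terminals.

Vs = 10.0  # source voltage, volts (hypothetical)
R1 = 4.0   # series resistance, ohms (hypothetical)
R2 = 6.0   # shunt resistance, ohms (hypothetical)

# Norton current: short the output terminals and compute the current
# through the short. Shorting out R2 leaves only R1 in the path.
I_N = Vs / R1

# Norton resistance: replace the voltage source with a short and look
# back into the terminals; R1 and R2 then appear in parallel.
R_N = (R1 * R2) / (R1 + R2)

print(f"I_N = {I_N:.3f} A")    # 2.500 A
print(f"R_N = {R_N:.3f} ohm")  # 2.400 ohm

# Sanity check: the equivalent must reproduce the original open-circuit
# voltage. I_N * R_N = 6.0 V matches the divider Vs * R2 / (R1 + R2).
print(f"V_oc = {I_N * R_N:.3f} V")
```

Note the check at the end: I_N * R_N equals the open-circuit (Thévenin) voltage, which is how the Norton and Thévenin equivalents of the same network relate to each other.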

Norton's theorem is an extension of Thévenin's theorem and was introduced in 1926 independently by two people: Siemens & Halske researcher Hans Ferdinand Mayer (1895–1980) and Bell Labs engineer Edward Lawry Norton (1898–1983). Only Mayer actually published on the topic; Norton made his finding known only through an internal technical report at Bell Labs.


In short, Norton's theorem states that it is possible to simplify any linear circuit, no matter how complex, to an equivalent circuit with just a single current source and a parallel resistance connected to a load.
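Once the Norton equivalent is known, the current delivered to any load follows directly from the current-divider rule. A minimal sketch, reusing the assumed values from the example above and a hypothetical load R_L:

```python
# Load current from a Norton equivalent via the current-divider rule:
# the Norton source current I_N splits between R_N and the load R_L.

I_N = 2.5  # Norton current, amperes (from the example above)
R_N = 2.4  # Norton resistance, ohms (from the example above)
R_L = 3.0  # load resistance, ohms (hypothetical)

I_L = I_N * R_N / (R_N + R_L)  # current through the load
print(f"I_L = {I_L:.3f} A")    # 1.111 A
```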