Designing to Fit the Moving Body

Katrin E. Kroemer Elbert, ... Anne D. Kroemer Hoffman, in Ergonomics (Third Edition), 2018

9.5.1 Work in Restricted Spaces

There are times when work must be performed in restricted spaces, such as in access tunnels, tanks, and mines. The primary restriction usually lies in the lowered ceiling of the workspace. Work becomes more difficult and stressful as the ceiling height forces workers to bend neck and back, or requires squatting, or even lying down. Thus, if restricted spaces are unavoidable, equipment and mechanical aids should be developed that alleviate the human's task. For example, in aircraft baggage handling, it is advantageous to first collect the luggage in containers and then put these containers into the cargo hold, rather than loading individual pieces into the cargo hold.

Other examples of restricted spaces are passageways, walkways, hallways, and corridors. Minimal dimensions for these are given in Fig. 9.13. For tight places, where one may have to squat, kneel, or lie on the back or belly, dimensions are given in Fig. 9.14 and Table 9.2. Dimensions for escape hatches, shown in Fig. 9.15, need to accommodate even the largest workers wearing their work clothes and possibly carrying equipment. These openings can be made somewhat smaller for maintenance workers who need to get through access openings in enclosures of machinery; recommended dimensions are shown in Fig. 9.16. The size of openings through which one hand must pass, holding and operating a tool, depends on the given circumstances; some recommended dimensions are shown in Fig. 9.17. These dimensions need to be modified if the operator must also see the object through the opening, or if special tools must be used and movements performed with the hand. In some cases, both hands and arms must fit through the opening, which then needs to be about shoulder-wide. For further information, see the standards issued by ISO, NASA, and the US military, and various design handbooks.51
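The "accommodate even the largest workers" rule behind Fig. 9.15 can be sketched as a simple calculation: take a high percentile of the relevant body dimension (assuming it is approximately normally distributed) and add an allowance for clothing and equipment. The percentile rule and all numbers below are illustrative assumptions, not values from this chapter:

```python
from statistics import NormalDist

def clearance(mean_cm: float, sd_cm: float,
              percentile: float = 0.99,
              clothing_allowance_cm: float = 0.0) -> float:
    """Clearance dimension that accommodates the given percentile of the
    user population, plus an allowance for clothing and equipment.
    Illustrative rule of thumb only; assumes a normal distribution."""
    z = NormalDist().inv_cdf(percentile)
    return mean_cm + z * sd_cm + clothing_allowance_cm

# Hypothetical example: shoulder breadth with assumed mean 49 cm and
# SD 3 cm, sized for the 99th percentile plus 5 cm for arctic clothing.
opening = clearance(49.0, 3.0, percentile=0.99, clothing_allowance_cm=5.0)
```

Actual design values should of course come from measured anthropometric data for the user population, such as the sources cited for Figs. 9.13 through 9.17.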

Figure 9.13. Minimal dimensions (in cm) for passageways and hallways.

Source: Adapted from Van Cott and Kinkade (1972).

Figure 9.14. Minimum height and depth dimensions for "tight" work spaces.

Source: Adapted from MIL-HDBK-759.

Table 9.2. Dimensions (in cm) for "Tight" Workspaces

                       |       Height H              |       Depth D
                       | Minimal  Preferred  Arctic  | Minimal  Preferred  Arctic
Stooped or squatting   |   66       130        –     |   61        90        –
Kneeling               |  140       150        –     |  106       122       127
Crawling               |   79        91       97     |  150       176        –
Prone                  |   43        51       61     |  285        –         –
Supine                 |   51        61       66     |  186       191       198

"Arctic" columns apply when arctic clothing is worn; dashes mark values not given.

Source: Adapted from MIL-HDBK-759.

Figure 9.15. Minimal openings for escape hatches.

Source: Adapted from Van Cott and Kinkade (1972).

Figure 9.16. Access openings for enclosures.

Source: Adapted from MIL-HDBK-759.

Figure 9.17. Minimal opening sizes (in cm) to allow one hand holding a tool to pass.

Source: Adapted from MIL-HDBK-759.

URL: https://www.sciencedirect.com/science/article/pii/B9780128132968000098

Temperature Measurements at the Nanoscale

Miroslav Dramićanin, in Luminescence Thermometry, 2018

Abstract

The established definition of temperature is questionable at the nanoscale, since the assumptions of a continuum and of thermodynamic equilibrium may not hold. Therefore, it is of fundamental interest to determine the minimal dimension of an object for which a local temperature exists. For this purpose, temperature measurements with nanoscale spatial resolution are important. However, they are also of considerable interest for many existing and emerging nanotechnologies in which the performance of structures is strongly determined by temperature, such as nanoelectronics and integrated photonics. This chapter outlines the methods for temperature measurement at the nanoscale and the use of luminescence for temperature mapping of microfluidic, nanofluidic, and nanoelectronic devices.

URL: https://www.sciencedirect.com/science/article/pii/B9780081020296000117

H2 and H∞ Optimization

Alexander S. Poznyak, in Advanced Mathematical Tools for Automatic Control Engineers: Deterministic Techniques, Volume 1, 2008

23.1.2 Minimal and balanced realizations

Criteria for the minimality of transfer matrix realizations

Definition 23.1

A state space realization $\left[\begin{smallmatrix} A & B \\ C & D \end{smallmatrix}\right]$ of the transfer matrix function G(s) is said to be a minimal realization of G(s) if the matrix A has the smallest possible dimension. Sometimes, this minimal dimension of A is called the McMillan degree of G(s).

Lemma 23.1. (The criterion of minimality of a realization)

A state space realization $\left[\begin{smallmatrix} A & B \\ C & D \end{smallmatrix}\right]$ of the transfer matrix function G(s) is minimal if and only if the pair (A, B) is controllable and the pair (C, A) is observable.

Proof
1.

Necessity. First, we show that if $\left[\begin{smallmatrix} A & B \\ C & D \end{smallmatrix}\right]$ is minimal, then the pair (A, B) is controllable and the pair (C, A) is observable. Suppose, on the contrary, that (A, B) is uncontrollable and/or (C, A) is unobservable. Then, by Theorem 23.3, there exists another realization with a smaller McMillan degree, which contradicts the minimality of the considered realization. This proves necessity.

2.

Sufficiency. Let now the pair (A, B) be controllable and the pair (C, A) be observable. Suppose that the given realization is not minimal and that there exists another realization $\left[\begin{smallmatrix} \tilde{A} & \tilde{B} \\ \tilde{C} & D \end{smallmatrix}\right]$ which is minimal, with order $n_{\min} < n$. Since by Theorem 23.3

$$G(s) = C(sI - A)^{-1}B + D = \tilde{C}(sI - \tilde{A})^{-1}\tilde{B} + D$$

for any i = 0, 1, … one has $CA^iB = \tilde{C}\tilde{A}^i\tilde{B}$, which implies

(23.9) $\mathcal{O}\mathcal{C} = \tilde{\mathcal{O}}\tilde{\mathcal{C}}$, where $\mathcal{O}$, $\mathcal{C}$ denote the observability and controllability matrices (and $\tilde{\mathcal{O}}$, $\tilde{\mathcal{C}}$ those of the second realization).

By the controllability and observability assumptions

$$\operatorname{rank}(\mathcal{O}) = \operatorname{rank}(\mathcal{C}) = n$$

and, hence, by the Sylvester inequality (2.24), we also have $\operatorname{rank}(\mathcal{O}\mathcal{C}) = n$. For the same reasons,

$$\operatorname{rank}(\tilde{\mathcal{O}}) = \operatorname{rank}(\tilde{\mathcal{C}}) = n_{\min} = \operatorname{rank}(\tilde{\mathcal{O}}\tilde{\mathcal{C}})$$

which, since $n_{\min} < n$, contradicts the identity $\operatorname{rank}(\mathcal{O}\mathcal{C}) = \operatorname{rank}(\tilde{\mathcal{O}}\tilde{\mathcal{C}})$ resulting from (23.9). Sufficiency is proven.
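Lemma 23.1 reduces to two numerical rank tests. A minimal sketch in Python with NumPy (the function names are our own; in floating point the raw controllability/observability matrices can be badly conditioned, so gramian-based tests are usually preferred in practice):

```python
import numpy as np

def ctrb(A, B):
    """Controllability matrix [B, AB, ..., A^(n-1) B]."""
    n = A.shape[0]
    return np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(n)])

def obsv(A, C):
    """Observability matrix [C; CA; ...; C A^(n-1)]."""
    n = A.shape[0]
    return np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(n)])

def is_minimal(A, B, C):
    """Lemma 23.1: minimal iff (A, B) is controllable and (C, A) is
    observable, i.e., both matrices have full rank n."""
    n = A.shape[0]
    return (np.linalg.matrix_rank(ctrb(A, B)) == n
            and np.linalg.matrix_rank(obsv(A, C)) == n)

# A 2-state realization of G(s) = 1/(s + 1) padded with an extra mode:
A = np.array([[-1.0, 0.0], [0.0, -2.0]])
B = np.array([[1.0], [0.0]])    # the second state is never excited
C = np.array([[1.0, 0.0]])      # ...and never observed
# is_minimal(A, B, C) is False; dropping the second state gives a
# minimal realization of McMillan degree 1.
```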

Corollary 23.2

If $\left[\begin{smallmatrix} A_i & B_i \\ C_i & D_i \end{smallmatrix}\right]$ (i = 1, 2) are two minimal realizations with controllability matrices $\mathcal{C}_i$ and observability matrices $\mathcal{O}_i$, respectively, then there exists a unique nonsingular coordinate transformation

(23.10) $x^{(2)} = Tx^{(1)}, \qquad T = (\mathcal{O}_2^{\mathsf T}\mathcal{O}_2)^{-1}\mathcal{O}_2^{\mathsf T}\mathcal{O}_1 \quad\text{or}\quad T^{-1} = \mathcal{C}_1\mathcal{C}_2^{\mathsf T}(\mathcal{C}_2\mathcal{C}_2^{\mathsf T})^{-1}$

such that in the compact forms presentation (23.2) the corresponding matrices are related as

(23.11) $A_2 = TA_1T^{-1}, \qquad B_2 = TB_1, \qquad C_2 = C_1T^{-1}$

Proof

It directly follows from (23.9) and (23.5).
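Corollary 23.2 is constructive: T can be computed from the two observability matrices via (23.10). A small numerical check, with an arbitrarily chosen basis change S that the formula should recover:

```python
import numpy as np

def obsv(A, C):
    """Observability matrix [C; CA; ...; C A^(n-1)]."""
    n = A.shape[0]
    return np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(n)])

# A minimal realization, and a second one obtained by a basis change S
A1 = np.array([[0.0, 1.0], [-2.0, -3.0]])
B1 = np.array([[0.0], [1.0]])
C1 = np.array([[1.0, 0.0]])
S = np.array([[1.0, 1.0], [0.0, 1.0]])      # arbitrary nonsingular matrix
Sinv = np.linalg.inv(S)
A2, B2, C2 = S @ A1 @ Sinv, S @ B1, C1 @ Sinv

# Recover the transformation from the observability matrices, as in (23.10)
O1, O2 = obsv(A1, C1), obsv(A2, C2)
T = np.linalg.solve(O2.T @ O2, O2.T @ O1)   # (O2' O2)^{-1} O2' O1
# T reproduces S, and (23.11) holds: A2 = T A1 T^{-1}, B2 = T B1
```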

Balanced realization for a transfer matrix

In spite of the fact that there are infinitely many different state space realizations for a given transfer matrix, some particular realizations turn out to be very useful in control engineering practice. First, let us prove the following lemma on the relation between the structure of a state space realization and the solutions of the corresponding matrix Lyapunov equations.

Lemma 23.2

Let $\left[\begin{smallmatrix} A & B \\ C & D \end{smallmatrix}\right]$ be a state space realization of a (not necessarily stable) transfer matrix G(s). Suppose that there exist symmetric matrices

(23.12) $P = \begin{bmatrix} P_1 & 0 \\ 0 & 0 \end{bmatrix} \quad\text{and}\quad Q = \begin{bmatrix} Q_1 & 0 \\ 0 & 0 \end{bmatrix}$

with P 1, Q 1 nonsingular, that is, P 1 > 0 and Q 1 > 0, such that

(23.13) $AP + PA^{\mathsf T} + BB^{\mathsf T} = 0, \qquad A^{\mathsf T}Q + QA + C^{\mathsf T}C = 0$

(in fact, P and Q are the controllability (9.54) and observability (9.62) grammians, respectively).

1.

If the partition of the state space realization, compatible with P, is $\left[\begin{smallmatrix} A_{11} & A_{12} & B_1 \\ A_{21} & A_{22} & B_2 \\ C_1 & C_2 & D \end{smallmatrix}\right]$, then $\left[\begin{smallmatrix} A_{11} & B_1 \\ C_1 & D \end{smallmatrix}\right]$ is also a realization of G(s); moreover, the pair $(A_{11}, B_1)$ is controllable, $A_{11}$ is stable, and $P_1 > 0$ satisfies the following matrix Lyapunov equation

(23.14) $A_{11}P_1 + P_1A_{11}^{\mathsf T} + B_1B_1^{\mathsf T} = 0$

2.

If the partition of the state space realization, compatible with Q, is $\left[\begin{smallmatrix} A_{11} & A_{12} & B_1 \\ A_{21} & A_{22} & B_2 \\ C_1 & C_2 & D \end{smallmatrix}\right]$, then $\left[\begin{smallmatrix} A_{11} & B_1 \\ C_1 & D \end{smallmatrix}\right]$ is also a realization of G(s); moreover, the pair $(C_1, A_{11})$ is observable, $A_{11}$ is stable, and $Q_1 > 0$ satisfies the following matrix Lyapunov equation

(23.15) $A_{11}^{\mathsf T}Q_1 + Q_1A_{11} + C_1^{\mathsf T}C_1 = 0$

Proof
1.

Substituting (23.12) into (23.13) implies

$$0 = AP + PA^{\mathsf T} + BB^{\mathsf T} = \begin{bmatrix} A_{11}P_1 + P_1A_{11}^{\mathsf T} + B_1B_1^{\mathsf T} & P_1A_{21}^{\mathsf T} + B_1B_2^{\mathsf T} \\ A_{21}P_1 + B_2B_1^{\mathsf T} & B_2B_2^{\mathsf T} \end{bmatrix}$$

which, since $P_1$ is nonsingular, gives $B_2 = 0$ and $A_{21} = 0$ (the (2,2) block yields $B_2B_2^{\mathsf T} = 0$, hence $B_2 = 0$; the (2,1) block then gives $A_{21}P_1 = 0$, hence $A_{21} = 0$). Hence,

$$\left[\begin{smallmatrix} A_{11} & A_{12} & B_1 \\ A_{21} & A_{22} & B_2 \\ C_1 & C_2 & D \end{smallmatrix}\right] = \left[\begin{smallmatrix} A_{11} & A_{12} & B_1 \\ 0 & A_{22} & 0 \\ C_1 & C_2 & D \end{smallmatrix}\right]$$

and, by Lemma 2.2, one has

$$G(s) = C(sI - A)^{-1}B + D = \begin{bmatrix} C_1 & C_2 \end{bmatrix}\begin{bmatrix} (sI - A_{11})^{-1} & (sI - A_{11})^{-1}A_{12}(sI - A_{22})^{-1} \\ 0 & (sI - A_{22})^{-1} \end{bmatrix}\begin{bmatrix} B_1 \\ 0 \end{bmatrix} + D = C_1(sI - A_{11})^{-1}B_1 + D$$

and, hence, $\left[\begin{smallmatrix} A_{11} & B_1 \\ C_1 & D \end{smallmatrix}\right]$ is also a realization. From Lemma 9.1, it follows that the pair $(A_{11}, B_1)$ is controllable and $A_{11}$ is stable if and only if $P_1 > 0$.

2.

The second part of the lemma follows by duality and can be proven by an analogous procedure.

Definition 23.2

A minimal state space realization $\left[\begin{smallmatrix} A & B \\ C & D \end{smallmatrix}\right]$ of a transfer matrix G(s) is said to be balanced if its two grammians P and Q are equal, that is,

(23.16) $P = Q$

Proposition 23.3. (The construction of a balanced realization)

Let $\left[\begin{smallmatrix} A & B \\ C & D \end{smallmatrix}\right]$ be a minimal realization of G(s). Then the following procedure leads to a balanced realization:

1.

Using (23.13), compute the controllability grammian P > 0 and the observability grammian Q > 0.

2.

Using the Cholesky factorization (4.31), find a matrix R such that

$$P = R^{\mathsf T}R$$

3.

Diagonalize $RQR^{\mathsf T}$, getting

$$RQR^{\mathsf T} = U\Sigma^2U^{\mathsf T}$$

4.

Let T be defined by $T^{-1} = R^{\mathsf T}U\Sigma^{-1/2}$ (that is, $T = \Sigma^{1/2}U^{\mathsf T}R^{-\mathsf T}$), and obtain the new $P_{\mathrm{bal}}$ and $Q_{\mathrm{bal}}$ as

(23.17) $P_{\mathrm{bal}} := TPT^{\mathsf T} = (T^{\mathsf T})^{-1}QT^{-1} =: Q_{\mathrm{bal}} = \Sigma$

Proof

The validity of this construction follows from Theorem 7.4 with A = P and B = Q. Taking into account that for a minimal realization both grammians are positive definite, we get (23.17).

Corollary 23.3

(23.18) $P_{\mathrm{bal}}Q_{\mathrm{bal}} = \Sigma^2 = \operatorname{diag}(\sigma_1^2, \ldots, \sigma_n^2)$

where $\sigma_1 \ge \cdots \ge \sigma_n$, indexed in decreasing order, are called the Hankel singular values of the time-invariant linear system with transfer matrix G(s).
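The four steps of Proposition 23.3 can be carried out numerically. A sketch in Python with NumPy/SciPy, for an arbitrarily chosen stable minimal realization (the transformation is taken as $T^{-1} = R^{\mathsf T}U\Sigma^{-1/2}$, so that $TPT^{\mathsf T} = (T^{\mathsf T})^{-1}QT^{-1} = \Sigma$):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# An arbitrarily chosen stable, minimal realization (illustrative values)
A = np.array([[-1.0, 0.5],
              [0.0, -2.0]])
B = np.array([[1.0],
              [1.0]])
C = np.array([[1.0, 1.0]])

# Step 1: grammians from the Lyapunov equations (23.13)
P = solve_continuous_lyapunov(A, -B @ B.T)      # A P + P A' + B B' = 0
Q = solve_continuous_lyapunov(A.T, -C.T @ C)    # A' Q + Q A + C' C = 0

# Step 2: Cholesky factor R with P = R' R
R = np.linalg.cholesky(P).T

# Step 3: diagonalize R Q R' = U Sigma^2 U'
w, U = np.linalg.eigh(R @ Q @ R.T)
w, U = w[::-1], U[:, ::-1]                      # decreasing order
Sigma = np.sqrt(w)                              # Hankel singular values

# Step 4: balancing transformation with T^{-1} = R' U Sigma^{-1/2}
Tinv = R.T @ U @ np.diag(Sigma ** -0.5)
T = np.linalg.inv(Tinv)

Pbal = T @ P @ T.T                              # equals diag(Sigma)
Qbal = Tinv.T @ Q @ Tinv                        # equals diag(Sigma)
Abal, Bbal, Cbal = T @ A @ Tinv, T @ B, C @ Tinv
```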

URL: https://www.sciencedirect.com/science/article/pii/B9780080446745500262

Structural Analysis for the Sensor Location Problem in Fault Detection and Isolation

Christian Commault, ... Sameh Yacoub Agha, in Fault Detection, Supervision and Safety of Technical Processes 2006, 2007

4 STRUCTURAL ANALYSIS VIA IRREDUCIBLE INPUT SEPARATORS

Consider the graph G(∑Λ)   =   (Z, W) of a structured system of type (7) with vertex set Z and edge set W.

Definition 5

(van der Woude, 2000) A separator S is a set of vertices such that any fault-output path has at least one vertex in S. The dimension of a separator is the number of elements in S.

Definition 6

A separator of dimension d is said to be irreducible if it does not contain a separator of dimension d′ < d.

Definition 7

An irreducible separator S is an irreducible input separator (IIS) of dimension d if for any irreducible separator S′ of dimension d′, such that any direct path from inputs to S contains a vertex in S′, we have d′ > d.

This means that a separator is an IIS if there is no irreducible separator of lower or equal dimension between inputs and this separator.

Notice that these IIS include the set of separators defined in (Commault et al., 2005). This will be illustrated in the example.

Among all the irreducible input separators, the one of minimal dimension can be proved to be unique (van der Woude, 2000). It is called the minimal input separator and is denoted S*.

S* can be obtained using standard maximum-flow algorithms, such as the Ford–Fulkerson algorithm (Hu, 1982). The dimension of S* is equal to the maximal size of a fault-output linking.

S* is indeed the first bottleneck between faults and outputs. S* may contain fault, state and output vertices.
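The max-flow computation of dim S* can be sketched as follows: give every vertex unit capacity by the standard in/out vertex splitting, so that the max-flow value from the faults to the outputs equals the maximal number of vertex-disjoint fault-output paths (Menger's theorem), i.e., the maximal size of a fault-output linking. The graph below is a hypothetical example, not the graph of Example 2:

```python
from collections import deque

def max_vertex_disjoint_paths(edges, sources, sinks):
    """Maximal size of a fault-output linking = dimension of the minimal
    separator S*, via unit vertex capacities and Edmonds-Karp max flow."""
    cap = {}
    def add(u, v, c):
        cap[(u, v)] = cap.get((u, v), 0) + c
        cap.setdefault((v, u), 0)
    nodes = {v for e in edges for v in e} | set(sources) | set(sinks)
    big = len(nodes)                          # effectively infinite capacity
    for v in nodes:
        add((v, 'in'), (v, 'out'), 1)         # unit capacity on each vertex
    for u, v in edges:
        add((u, 'out'), (v, 'in'), big)
    for s in sources:
        add('S', (s, 'in'), big)
    for t in sinks:
        add((t, 'out'), 'T', big)
    adj = {}
    for (u, v) in cap:
        adj.setdefault(u, []).append(v)
    flow = 0
    while True:                               # BFS augmenting paths
        parent, dq = {'S': None}, deque(['S'])
        while dq and 'T' not in parent:
            u = dq.popleft()
            for v in adj.get(u, []):
                if v not in parent and cap[(u, v)] > 0:
                    parent[v] = u
                    dq.append(v)
        if 'T' not in parent:
            return flow
        v = 'T'
        while parent[v] is not None:          # augment by one unit
            u = parent[v]
            cap[(u, v)] -= 1
            cap[(v, u)] += 1
            v = u
        flow += 1

# Hypothetical graph: four faults feed two states that reach three
# outputs, so the first bottleneck {x1, x2} has size 2.
edges = [('f1', 'x1'), ('f2', 'x1'), ('f3', 'x2'), ('f4', 'x2'),
         ('x1', 'x3'), ('x2', 'x4'), ('x3', 'y1'), ('x3', 'y2'),
         ('x4', 'y3')]
dim_S_star = max_vertex_disjoint_paths(edges, ['f1', 'f2', 'f3', 'f4'],
                                       ['y1', 'y2', 'y3'])
```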

We will now give an important property of the IIS.

Theorem 8

Consider the structured system ∑Λ and its associated graph G(∑Λ).

A separator S of dimension d is an IIS if and only if:

There exists an F–S linking of size d in G(∑Λ).

For any separator S′ such that any direct path from F to S contains a vertex in S′, the maximal size of an F–S′ linking in G(∑Λ) is d′ > d.

Proof

We give only a sketchy proof.

Assume that the F–S′ linking is of size d′ ≤ d; then S′ would be a separator of dimension not greater than that of S and located between F and S, contradicting the fact that S is an IIS.

Example 2

Consider the structured system ∑Λ whose associated graph is depicted in Figure 2.

Fig. 2. Graph G(∑Λ) of Example 2

From this graph we can remark that:

-

The set {x3, y1, y2, y3} is a separator of dimension 4, but it is not an irreducible separator because it contains the separator {y1, y2, y3} of dimension 3.

-

{x3, x4}, {x3, x2}, {x1, x4}, {x1, x2} are all irreducible separators of dimension 2; among them, only {x1, x2} is an irreducible input separator of minimal dimension, and it is the minimal input separator S* of the system.

-

$S_3^1 = \{f_1, f_2, x_2\}$ and $S_3^2 = \{f_3, f_4, x_1\}$ are two irreducible input separators of dimension 3.

In (Commault et al., 2005) we considered only some specific disjoint separators, i.e., {x1, x2} and {f1, f2, f3, f4}.

We will now show that the set of IIS can be endowed with a lattice structure, i.e., a partially ordered set in which all nonempty finite subsets have both a supremum and an infimum (Blyth and Janowitz, 1972).

Definition 9

Consider the structured system ∑Λ with its associated graph G(∑Λ). Consider an IIS S of G(∑Λ).

Define T_S as the set of all vertices on any direct path from F to S in G(∑Λ), except for the vertices of S.

Consider the following partial order relation:

S ≤ S′ if T_S ⊂ T_S′. We have the following:

Proposition 10

Consider the structured system ∑Λ and its associated graph G(∑Λ). The set of IIS with the above defined partial order has a lattice structure. Furthermore

The infimum of this lattice is the minimal input separator S*.

The supremum of this lattice is the set of fault vertices F.

Remark 11

On any S*–F path in the oriented graph of the lattice, the order is total.

In Example 2, the set of four IIS has a lattice structure, as illustrated in Figure 3.

Fig. 3. The lattice structure corresponding to the set of IIS of Example 2

URL: https://www.sciencedirect.com/science/article/pii/B9780080444857501512

A refined resolvent decomposition of a regular polynomial matrix and application to the solution of regular PMDs

Liansheng Tan, in A Generalized Framework of Linear Multivariable Control, 2017

9.5 Conclusions

So far, two special resolvent decompositions have been proposed in the literature through which the solution of a PMD may be expressed. These are based on two different interpretations of the notion of an infinite Jordan pair, the first being due to Gohberg et al. [17] and the second due to Vardulakis [13]. The resolvent decomposition proposed by Gohberg et al. [17] uses a certain redundant system structure that results in overly large dimensions of the infinite Jordan pair, though that pair is relatively simple to calculate. On the other hand, the approach proposed by Vardulakis [13] uses only the relevant system structure, without redundant information, and the resulting infinite Jordan pair is of minimal dimensions; it is, however, relatively more difficult to compute the required special realizations.

In this chapter, it is established that the redundant information contained in the infinite Jordan pair defined by Gohberg et al. [17] can be deleted through a certain transformation. Based on this, a natural connection between the infinite Jordan pairs defined by Gohberg et al. [17] and that of Vardulakis [13] has been exploited. This facilitates a refinement of the resolvent decomposition. This resulting resolvent decomposition more precisely reflects the relevant system structure and thereby inherits the advantages of both the decompositions of Gohberg et al. [17] and Vardulakis [13].

In the proposed approach the matrices Z, Z in Eq. (9.3) are formulated explicitly, which means that the method is constructive. The main idea is to calculate an elementary matrix P, which is very easy to obtain, to delete the redundant information, and then to propose the refined resolvent decomposition. This elementary matrix deletes the redundant information in two ways. First, it deletes the redundant information in those blocks of the infinite Jordan pair of Gohberg et al. [17] that correspond to the infinite zeros and brings them to the correct sizes. Second, it deletes the whole blocks of the infinite Jordan pair of Gohberg et al. [17] that correspond to the infinite poles, as well as the whole blocks that are not dynamically important. The elementary matrix serves to transform the partitioned block matrix in Z that corresponds to the redundant information into zero; the resulting refined resolvent decomposition is thus of minimal dimensions. Further, by using this elementary matrix, the mechanism of decoupling in the solution of Gohberg et al. [17] is explained clearly. The refined resolvent decomposition facilitates computation of the inverse of A(s), because the dimensions of the matrices involved are minimal. Once the refined resolvent decomposition is obtained, the generalized infinite Jordan pair and the elementary matrix P are no longer needed in the calculation of the solution of the regular PMD. This presents another merit of the method, which is algorithmically attractive in actual computation.

Based on this refined resolvent decomposition, the complete solution of regular PMDs has then been investigated. This solution presents the zero-input response and the zero-state response precisely and takes into account the impulsive properties of the system. An algorithm for computing the refined resolvent decomposition, which has already been implemented in the symbolic computation package Maple, is provided.

Compared with the complete solution of regular PMDs given in Chapter 8, which is based on an arbitrary resolvent decomposition, the resolvent decomposition obtained in this chapter is minimal; the resulting solution is thus specific to this refined resolvent decomposition.

URL: https://www.sciencedirect.com/science/article/pii/B9780081019467000093

The influence of R. E. Kalman—state space theory, realization, and sampled-data systems

Yutaka Yamamoto, in Annual Reviews in Control, 2019

4 Realization theory

Realization theory constitutes one of Kalman's central themes. For him, it was not merely a procedure for finding a triple of matrices (F, G, H) from an external description, say, a transfer function.

Let us quote from his commemorative lecture (Kalman, 1985) on the occasion of the Kyoto Prize:

Data that are exact and complete are realized (that is, explained) by precisely one minimal model.

Clearly, he regarded the realization problem to be fundamental to any scientific modeling.

I was and will remain fascinated by the realization problem. It served to educate me about how classical science evolved...(ditto, (Kalman, 1985)).

According to him, complete and exact data must be realized, i.e., have a model, and there must exist only one minimal model. He must have thought that this problem and principle should apply to every context of modeling, not merely restricted to linear finite-dimensional systems.

Let us see what it is in the context of finite-dimensional linear systems. From here on, we confine ourselves to the time-invariant case, i.e., the matrices F(t), G(t), H(t) in (1) and (2) are constant in time t.

The realization problem asks the following question:

Given an external behavior of a system, e.g., a transfer function, what is a corresponding system that gives rise to that external behavior?

The following questions immediately arise:

Does there exist such a realization at all?

If there exists one, is it unique?

A system whose transfer function agrees with the given one is called a realization of that transfer function. Let A(z) be any transfer function, expanded as a power series in $z^{-1}$ as

(3) $A(z) = \sum_{k=1}^{\infty} a_k z^{-k}.$

For the sake of brevity, let us assume that all $a_k$ are scalars. Then one has the following result:

Theorem 1

The transfer function (3) admits a finite-dimensional realization if and only if its associated (infinite) Hankel matrix (4) has finite rank. If $\operatorname{rank} H = N$, then there exists a realization of state dimension N, and this N is minimal among all realizations. The finite-rank condition on H is equivalent to the transfer function (3) being strictly proper rational, i.e., A(z) can be written in the form p(z)/q(z) with polynomials p, q and $\deg p < \deg q$.

(4) $H := \begin{bmatrix} a_1 & a_2 & \cdots & a_n & \cdots \\ a_2 & a_3 & \cdots & a_{n+1} & \cdots \\ a_3 & a_4 & \cdots & a_{n+2} & \cdots \\ \vdots & \vdots & & \vdots & \\ a_n & a_{n+1} & \cdots & a_{2n-1} & \cdots \\ \vdots & \vdots & & \vdots & \ddots \end{bmatrix}.$
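Theorem 1 can be checked numerically on finite truncations of H: for a strictly proper rational transfer function, the rank of a sufficiently large n × n Hankel block equals the minimal state dimension. A sketch in Python (the Markov-parameter sequences below are illustrative choices, not examples from the article):

```python
import numpy as np

def hankel_rank(a, n):
    """Rank of the n-by-n truncation of the Hankel matrix (4) built from
    Markov parameters a_1, a_2, ...; requires len(a) >= 2n - 1."""
    H = np.array([[a[i + j] for j in range(n)] for i in range(n)])
    return np.linalg.matrix_rank(H)

# Markov parameters of A(z) = 1/(z - 0.5): a_k = 0.5**(k-1).
# A single geometric sequence gives rank 1, so the minimal dimension is 1.
a = [0.5 ** k for k in range(20)]
n1 = hankel_rank(a, 5)

# Sum of two first-order modes with distinct poles: minimal dimension 2.
b = [0.5 ** k + (-0.3) ** k for k in range(20)]
n2 = hankel_rank(b, 5)
```

The same Hankel data underlie the Ho-Kalman construction of a canonical realization, where a rank factorization of the truncated H yields the (F, G, H) matrices directly.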

The uniqueness question is more intricate because there are clearly infinitely many realizations for a given transfer function. Kalman recognized that this nonuniqueness is superficial due to the following two reasons (Kalman, Falb, & Arbib, 1969):

There are redundant realizations; that is, there exist realizations in which part of the state has no connection to the input or the output. Such redundancy can be detected as a lack of reachability (controllability) or observability. A realization that is both reachable and observable is called canonical, and it has minimal dimension among all realizations.

The choice of a basis in the state space of a canonical realization induces freedom; this leads to nonuniqueness of the matrix representation (F, G, H), but only up to a change of basis. Hence all canonical (i.e., minimal) realizations are mutually isomorphic, modulo the choice of a basis in the state space.

In other words, for a given transfer function, there exists an essentially unique canonical realization up to isomorphism.

Kalman apparently regarded this uniqueness of a canonical realization as the leading principle in any scientific modeling, be it in physics, system theory, or elsewhere. Whatever the field, if the model were intrinsically nonunique for exact and complete data, then there would be no objective scientific ground for asserting the correctness of results derived from such a model.

4.1 Kalman's k[z]-module approach

The above realization problem can be cast into a more abstract framework. This is Kalman's k[z]-module approach (k is any field where the system description takes place). Following this approach, realization theory had more technical and conceptual impacts on subsequent developments.

Let us first try to explain it intuitively. The crux of constructing a realization is finding and constructing a state space from given input/output data. By the natural causality requirement, the state should be an object that separates the future from the past and acts as a memory device storing the past history of inputs necessary to produce future outputs.

In view of the above, let us now describe the k[z]-module framework that Kalman introduced. Let $\Omega := \mathbb{R}[z]$ and $\Gamma := z^{-1}\mathbb{R}[[z^{-1}]]$. The latter is the set of all formal power series in $z^{-1}$ with no constant term. Ω represents the set of all past inputs and Γ the set of all future outputs, where time t is represented by the power $z^t$. Then the input/output map associated with the transfer function (3) is defined as

(5) $f(\omega) = \pi(A(z)\omega)$

where π is the truncation that maps all polynomial terms to zero. The state transition is induced by the time shift of one step, represented here by multiplication by z. Both Ω and Γ possess natural z-multiplications, which turn them into $\mathbb{R}[z]$-modules. The crux of realization in this context is to find an $\mathbb{R}[z]$-module X that factors f as $f = h \circ g$ with module homomorphisms g and h, as shown in the commutative diagram below.

(6) [commutative diagram: $\Omega \xrightarrow{g} X \xrightarrow{h} \Gamma$, with $f = h \circ g$]

Then multiplication by z in X represents the free state transition F, and g and h naturally give rise to the G and H operators; see Kalman et al. (1969) for details.

The simplest examples of such an X are Ω and Γ themselves. They amount to storing "all past inputs" or "all future outputs." Needless to say, these realizations are highly redundant.

Actually, one way of removing such redundancy is to take the Nerode equivalence classes $\Omega/\ker f$. Yet another is to take the image $\operatorname{im} f$ of the input/output map f itself. A beautiful theory, which is both algebraic and singles out the essential features of realization theory, has been obtained (Kalman et al., 1969).

4.2 Extensions

This framework has had much impact on realization theory in other contexts. For example, it naturally led to the matrix factorization approach over $\mathbb{R}[z]$; see, for example, Fuhrmann (1976), which later had further impact on systems over rings and on infinite-dimensional systems (e.g., Khargonekar & Sontag, 1982; Yamamoto, 1988). While the above framework does not directly translate to an infinite-dimensional context, it gives rise to a natural analogue.

There are many possible choices for generalizing Ω and Γ. If we keep locally $L^2$-type spaces, the inductive and projective limits $\varinjlim L^2[-n, 0]$ and $\varprojlim L^2[0, n]$ for Ω and Γ, respectively, are natural choices (Yamamoto, 1981). Or, assuming stability, $L^2(-\infty, 0]$ and $L^2[0, \infty)$ can be taken instead, with a strong connection to $H^2$-space formalisms via the Laplace transform (Baras & Brockett, 1975; Baras, Brockett, & Fuhrmann, 1974); see also Helton (1976) for a related approach. Kalman and Hautus (1972) also used spaces of distributions of a similar nature as $\mathbb{R}[z]$ and $z^{-1}\mathbb{R}[z^{-1}]$.

As in Fuhrmann (1976), one can invoke the following fractional factorization to obtain a compact representation for an observable realization (Yamamoto, 1988; 1989). Let A be an impulse response matrix as in (3), but in continuous time:

(7) $A = Q^{-1} * P$

where Q and P are matrices of distributions of compact support with suitable sizes. This factorization leads to a natural topologically observable (i.e., initial state determination being well posed) realization with state space

(8) $X^Q := \{\, x \in L^2_{loc}[0, \infty) \mid (Q * x)|_{(0,\infty)} = 0 \,\}.$

This is a left-shift-invariant subspace of $L^2_{loc}[0, \infty)$, and the infinitesimal generator F of the left-shift semigroup on this space can serve as the generator of the state transition. The G and H operators are obtained similarly, and together these yield a natural realization.

More details can be found in Yamamoto (1981) and Antoulas, Matsuo, and Yamamoto (1991); the latter also gives further topics in realization theory for finite-dimensional systems.

4.3 Other contexts

For nonlinear systems with polynomial-type input/output correspondence, algebraic varieties are a natural platform for dealing with such objects. Sontag (1979) proved the uniqueness of canonical realizations by introducing the notion of algebraic observability. What is common to Sontag's and Yamamoto's work is the recognition that realization theory is most naturally placed in the framework of mathematical categories (e.g., algebraic or topological). This is clearly in line with Kalman's fundamental insight that modeling should be placed in a proper and unified framework.

Kalman also attempted to develop a theory for the situation where data are not exact (Kalman, 1994). There, he examined notions of randomness, probability, and modeling in econometrics. How successful this was may be left for future evaluation.

There is one missing issue in the realization framework: we usually start with a prespecified choice of inputs and outputs. In reality, we often encounter situations in which a large amount of raw signal data is given with no a priori distinction between inputs and outputs. J. C. Willems developed behavioral system theory, where one does not assume an a priori input-output structure; see, e.g., Willems (2007) and references therein.

URL: https://www.sciencedirect.com/science/article/pii/S136757881930032X