Saturday 4 February 2017


Modeling and Simulation: The Shortest Route to Applications

This site features information about the modeling and simulation of discrete event systems. It includes discussions of descriptive simulation modeling, programming commands, techniques for sensitivity estimation, optimization and goal seeking by simulation, and what-if analysis. Advances in computing power, the availability of PC-based modeling and simulation, and efficient computational methodology are allowing the development of prescriptive simulation models, such as optimization, to pursue investigations in systems analysis, design, and control processes that were previously beyond the reach of modelers and decision makers.

Topics covered: statistics and probability for simulation, topics in descriptive simulation modeling, simulation-based optimization techniques, metamodeling and the goal-seeking problem, and what-if analysis techniques.

Introduction and Summary: Computer system users, administrators, and designers usually have the goal of highest performance at lowest cost. Modeling and simulation of system design trade-offs is good preparation for design and engineering decisions in real-world jobs. On this site we study computer systems modeling and simulation. We need proper knowledge both of the techniques of simulation modeling and of the simulated systems themselves. The switch-design scenario discussed below is just one situation in which computer simulation can be used effectively.

Besides its use as a tool to better understand and optimize the performance and reliability of systems, simulation is also used extensively to verify the correctness of designs. Most, if not all, digital integrated circuits manufactured today are first simulated extensively before they are manufactured, to identify and correct design errors. Simulation early in the design cycle is important because the cost of repairing mistakes rises dramatically the later in the product life cycle an error is detected. Another important application of simulation is in developing virtual environments, e.g. for training. Analogous to the holodeck in the popular science-fiction television program Star Trek, simulations generate dynamic environments with which users can interact as if they were really there. Such simulations are used extensively today to train military personnel for battlefield situations, at a fraction of the cost of running exercises involving real tanks, aircraft, and so on.

Dynamic modeling in organizations is the collective ability to understand the implications of change over time. This skill lies at the heart of a successful strategic decision process. The availability of effective visual modeling and simulation enables the analyst and the decision maker to boost their dynamic decision making by rehearsing strategy so as to avoid hidden pitfalls.
System simulation is the mimicking, on a computer, of the operation of a real system, such as the day-to-day operation of a bank, the value of a stock portfolio over a time period, the running of an assembly line in a factory, or the staff assignment of a hospital or a security company. Instead of building extensive mathematical models by experts, readily available simulation software makes it possible for non-experts, who are managers but not programmers, to model and analyze the operation of a real system.

A simulation is the execution of a model, represented by a computer program, that gives information about the system being investigated. The simulation approach to analyzing a model is opposed to the analytical approach, where the method of analyzing the system is purely theoretical. While the analytical approach is more reliable, the simulation approach offers more flexibility and convenience. The activities of the model consist of events, which are activated at certain points in time and in this way affect the overall state of the system. The points in time at which an event is activated are randomized, so no input from outside the system is required. Events exist autonomously and they are discrete, so between the execution of two events nothing happens. SIMSCRIPT provides a process-based approach to writing a simulation program; with this approach, the components of the program consist of entities, which combine several related events into one process.

In the field of simulation, the concept of the principle of computational equivalence has beneficial implications for the decision maker. Simulated experimentation accelerates and effectively replaces the wait-and-see anxieties in discovering new insights and explanations of the future behavior of the real system.

Consider the following scenario. You are the designer of a new switch for asynchronous transfer mode (ATM) networks, a switching technology that has appeared on the marketplace in recent years. In order to ensure the success of your product in a highly competitive field, it is important that you design the switch to yield the highest possible performance while maintaining a reasonable manufacturing cost. How much memory should be built into the switch? Should the memory be associated with incoming communication links, to buffer messages as they arrive, or should it be associated with outgoing links, to hold messages competing to use the same link? Moreover, what is the best organization of hardware components within the switch? These are but a few of the questions that you must answer in coming up with a design.

With the integration of artificial intelligence, agents, and other modeling techniques, simulation has become an effective and appropriate decision support tool for managers. By combining the emerging science of complexity with newly popularized simulation technology, the PricewaterhouseCoopers Emergent Solutions Group builds software that allows senior management to safely play out what-if scenarios in artificial worlds. For example, in a consumer retail environment it can be used to find out how the roles of consumers and employees can be simulated to achieve peak performance.
Statistics for Correlated Data

We concern ourselves with n realizations that are related in time, i.e. n correlated observations. The estimate of the mean is given by

mean = Σ X_i / n, where the sum is over i = 1 to n.

Let A = Σ (1 − j/(m+1)) r_j,x, where the sum is over j = 1 to m; then the estimated variance of the mean is:

[1 + 2A] S² / n

where S² is the usual variance estimate, r_j,x is the jth coefficient of autocorrelation, and m is the maximum time lag for which autocorrelations are computed, so that j = 1, 2, 3, ..., m. As a good rule of thumb, the maximum lag for which autocorrelations are computed should be approximately 2% of the number of realizations n, although each r_j,x can be tested to determine whether it is significantly different from zero.

Sample size determination: the minimum sample size required is

n ≥ [1 + 2A] S² Z² / (d² · mean²).

Application: a pilot run of a model was made with 150 observations; the mean was 205.74 minutes and the variance was S² = 101,921.54. Estimates of the lag correlation coefficients were computed as r_1,x = 0.3301, r_2,x = 0.2993, and r_3,x = 0.1987. Calculate the minimum sample size needed to ensure that the estimate lies within ±d = 10% of the true mean with α = 0.05:

n ≥ (1.96)² (101,921.54) [1 + 2((1 − 1/4)(0.3301) + (1 − 2/4)(0.2993) + (1 − 3/4)(0.1987))] / [(0.1)² (205.74)²] ≈ 1752.
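This calculation is easy to script. Below is a minimal Python sketch; the function name is ours, and the lag weighting 1 − j/(m+1) is read off the worked example above:

    import math

    def min_runs_correlated(s2, mean, r, d=0.10, z=1.96):
        """Minimum sample size for correlated output, using the
        lag-weighted correction A = sum((1 - j/(m+1)) * r_j)."""
        m = len(r)
        A = sum((1 - j / (m + 1)) * rj for j, rj in enumerate(r, start=1))
        n = (1 + 2 * A) * s2 * z**2 / (d * mean) ** 2
        return math.ceil(n)

    # Pilot-run figures from the application above:
    print(min_runs_correlated(101921.54, 205.74, [0.3301, 0.2993, 0.1987]))
    # -> 1752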
What is the Central Limit Theorem? For practical purposes, the main idea of the central limit theorem (CLT) is that the average of a sample of observations drawn from a population with any shape of distribution is approximately distributed as a normal distribution if certain conditions are met. In theoretical statistics there are several versions of the central limit theorem, depending on how these conditions are specified. They concern the types of assumptions made about the distribution of the parent population (the population from which the sample is drawn) and the actual sampling procedure. One of the simplest versions of the theorem says that for a random sample of size n (say, n larger than 30) from an infinite population with finite standard deviation, the standardized sample mean converges to a standard normal distribution or, equivalently, the sample mean approaches a normal distribution with mean equal to the population mean and standard deviation equal to the standard deviation of the population divided by the square root of the sample size n. In applying the central limit theorem to practical problems in statistical inference, however, statisticians are more interested in how closely the approximate distribution of the sample mean follows a normal distribution for finite sample sizes than in the limiting distribution itself. Sufficiently close agreement with a normal distribution allows statisticians to use normal theory for making inferences about population parameters (such as the mean) using the sample mean, irrespective of the actual form of the parent population. It is well known that, whatever the parent population, the standardized variable has a distribution with mean 0 and standard deviation 1 under random sampling. Moreover, if the parent population is normal, then it is distributed exactly as a standard normal variable for any positive integer n.

The central limit theorem gives the remarkable result that, even when the parent population is not normal, the standardized variable is approximately normal if the sample size is large enough (say, greater than 30). It is generally not possible to state conditions under which the approximation given by the central limit theorem works, or what sample sizes are needed before the approximation becomes good enough. As a general guideline, statisticians have used the prescription that if the parent distribution is symmetric and relatively short-tailed, the sample mean reaches approximate normality for smaller samples than if the parent population is skewed or long-tailed. In this lesson we will study the behavior of the mean of samples of different sizes drawn from a variety of parent populations. Examining sampling distributions of sample means computed from samples of different sizes drawn from a variety of distributions allows us to gain insight into the behavior of the sample mean under those specific conditions, as well as to examine the validity of the guidelines mentioned above for using the central limit theorem in practice. Under certain conditions, in large samples, the sampling distribution of the sample mean can be approximated by a normal distribution. The sample size needed for the approximation to be adequate depends strongly on the shape of the parent distribution. Symmetry (or the lack of it) is particularly important. For a symmetric parent distribution, even if very different in shape from a normal distribution, an adequate approximation can be obtained with small samples (e.g. 10 or 12 for the uniform distribution). For symmetric short-tailed parent distributions, the sample mean reaches approximate normality for smaller samples than if the parent population is skewed and long-tailed. In some extreme cases (e.g. the binomial), sample sizes far exceeding the typical guideline (say, 30) are needed for an adequate approximation. For some distributions without first and second moments (e.g. the Cauchy), the central limit theorem does not hold.

What is a Least-Squares Model? Many problems in analyzing data involve describing how variables are related. The simplest of all models describing the relationship between two variables is a linear, or straight-line, model. The simplest method of fitting a linear model is to eyeball a line through the data on a plot. A more elegant, and conventional, method is that of least squares, which finds the line minimizing the sum of the squared vertical distances between the observed points and the fitted line. Realize that fitting the best line by eye is difficult, especially when there is a lot of residual variability in the data. Know that there is a simple connection between the numerical coefficients in the regression equation and the slope and intercept of the regression line. Know that a single summary statistic, such as a correlation coefficient, does not tell the whole story. A scatter plot is an essential complement to examining the relationship between the two variables.
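The least-squares line has a closed form. A minimal sketch (the data points are illustrative, not from the text):

    def least_squares(xs, ys):
        """Closed-form slope and intercept minimizing the sum of
        squared vertical distances to the fitted line y = a + b*x."""
        n = len(xs)
        mx = sum(xs) / n
        my = sum(ys) / n
        b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
        a = my - b * mx
        return a, b

    a, b = least_squares([1, 2, 3, 4], [2.1, 3.9, 6.2, 7.8])
    print(f"intercept={a:.3f}, slope={b:.3f}")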
ANOVA: Analysis of Variance

The tests we have learned up to this point allow us to test hypotheses about the difference between only two means. Analysis of variance, or ANOVA, allows us to test the difference between two or more means. ANOVA does this by examining the ratio of the variability between conditions to the variability within each condition. For example, say we give a drug that we believe will improve memory to one group of people and a placebo to another group. We might measure memory performance by the number of words recalled from a list that everyone is asked to memorize. A t-test would assess the likelihood of observing the difference in the mean number of words recalled for each group. An ANOVA test, on the other hand, compares the variability that we observe between the two conditions to the variability observed within each condition. Recall that we measure variability as the sum of the squared differences of the individual scores from the mean. When we actually calculate an ANOVA we use a short-cut formula. Thus, when the variability that we predict (between the two groups) is much greater than the variability we do not predict (within each group), we conclude that our treatments produce different results.

Exponential Density Function

An important class of decision problems under uncertainty concerns the chance between events: for example, the chance that the time to the next breakdown of a machine does not exceed a certain time, such as the copying machine in your office not breaking down this week. The exponential distribution gives the distribution of the time between independent events occurring at a constant rate. Its density function is:

f(t) = λ e^(−λt), t ≥ 0,

where λ is the average number of events per unit of time, which is a positive number. The mean and the variance of the random variable t (time between events) are 1/λ and 1/λ², respectively. Applications include the probabilistic assessment of the time between arrivals of patients at the emergency room of a hospital, and of the arrivals of ships at a given port. Remarks: the exponential is a special case of both the Weibull and the gamma distributions. You may use the Exponential Applet to perform your computations, and the Lilliefors test for exponentiality to perform the goodness-of-fit test.
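Exponential variates are easy to generate by inverse-transform sampling. A minimal sketch, with an illustrative rate:

    import math
    import random

    def exp_variate(lam):
        """Inverse-transform sampling: if U ~ Uniform(0,1), then
        t = -ln(U)/lam has density f(t) = lam * exp(-lam * t)."""
        return -math.log(random.random()) / lam

    random.seed(1)
    lam = 0.5                          # illustrative: 0.5 events per unit time
    sample = [exp_variate(lam) for _ in range(100_000)]
    print(sum(sample) / len(sample))   # close to the mean 1/lam = 2.0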
Poisson Process

An important class of decision problems under uncertainty is characterized by the small chance of the occurrence of a particular event, such as an accident. The Poisson distribution gives the probability of exactly x independent occurrences during a given period of time when events take place independently and at a constant rate. It may also represent the number of occurrences over constant areas or volumes. The following statements describe the Poisson process: the occurrences of the events are independent, i.e. the occurrence of an event in an interval of space or time has no effect on the probability of a second occurrence of the event in the same, or any other, interval; theoretically, an infinite number of occurrences of the event must be possible in the interval; the probability of a single occurrence of the event in a given interval is proportional to the length of the interval; and in any infinitesimally small portion of the interval, the probability of more than one occurrence of the event is negligible. Poisson processes are often used, for example, in quality control, reliability, insurance claims, the number of incoming telephone calls, and queuing theory.

An application: one of the most useful applications of the Poisson process is in the field of queuing theory. In many situations where queues occur, it has been shown that the number of people joining the queue in a given time period follows the Poisson model. For example, if the arrival rate to an emergency room is λ per unit of time (say, one hour), then:

P(n arrivals) = λⁿ e^(−λ) / n!

The mean and the variance of the random variable n are both λ. However, if the mean and the variance of a random variable have equal numerical values, it does not follow that its distribution is Poisson. In particular:

P(0 arrivals) = e^(−λ)
P(1 arrival) = λ e^(−λ) / 1!
P(2 arrivals) = λ² e^(−λ) / 2!

and, in general:

P(n+1 arrivals) = λ P(n arrivals) / (n + 1).

You may use the Poisson Applet to perform your computations. Goodness-of-fit for the Poisson: replace the numerical example data with up to 14 pairs of observed values and their frequencies, and then click the Calculate button. Blank boxes are not included in the calculations. While entering your data, use the Tab key, not the arrow or Enter keys, to move from cell to cell in the data matrix.

Uniform Density Function

Application: gives the probability that an observation will occur within a particular interval when the probability of occurrence within that interval is directly proportional to the interval length. Example: used to generate random numbers in sampling and in Monte Carlo simulation. Remarks: a special case of the beta distribution. The density of the geometric mean of n independent Uniform(0,1) variates is:

P(X = x) = n x^(n−1) [ln(1/xⁿ)]^(n−1) / (n − 1)!

and Z = [U^λ − (1 − U)^λ] / λ is said to have Tukey's symmetric λ-distribution. You may use the Uniform Applet to perform your computations.

Some useful SPSS commands: for further SPSS programs useful for the analysis of simulation inputs, see the data analysis routines.

Random Number Generators

Classical uniform random number generators have some major defects, such as short period length and lack of higher-dimensional uniformity. Nowadays, however, there is a class of rather complex generators which is as efficient as the classical generators while enjoying the properties of a much longer period and of higher-dimensional uniformity. Computer programs that generate random numbers use an algorithm. That means that if you know the algorithm and the seed values, you can predict what numbers will result. Because you can predict the numbers, they are not truly random: they are pseudorandom. For statistical purposes, good pseudorandom number generators are good enough.
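A minimal sketch of this seed-determinism, using a simple multiplicative linear congruential generator; the constants are the classic "minimal standard" (Lehmer) parameters, used purely for illustration and not the RANECU constants discussed next:

    def lcg(seed, a=16807, m=2**31 - 1):
        """Minimal multiplicative linear congruential generator:
        x <- (a * x) mod m, scaled to (0, 1)."""
        x = seed
        while True:
            x = (a * x) % m
            yield x / m

    g = lcg(12345)
    print([round(next(g), 6) for _ in range(3)])
    g = lcg(12345)               # same seed: exactly the same stream
    print([round(next(g), 6) for _ in range(3)])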
The random number generator RANECU is a FORTRAN code for a generator of uniform random numbers on (0,1). RANECU is a multiplicative linear congruential generator suitable for a 16-bit platform. It combines three simple generators and has a period exceeding 8 × 10^12. It is constructed for more efficient use by returning a sequence of such numbers, LEN in total, in a single call. A set of three non-zero integer seeds can be supplied, failing which a default set is employed. If supplied, these three seeds, in order, should lie in the ranges [1, 32362], [1, 31726], and [1, 31656], respectively. See also the shuffling routine in Visual Basic.

The Squared Histogram Method

We are given a histogram, with vertical bars having heights proportional to the probability with which we want to produce the value indicated by the label at the base. The idea is to cut the bars into pieces, then reassemble them into a squared histogram with all heights equal, each final bar having a lower part, as well as an upper part indicating where it came from. A single uniform random variable U can then be used to choose one of the final bars and to indicate whether to use the lower or the upper part. There are many ways to do this cutting and reassembling; the simplest seems to be the Robin Hood algorithm: take from the richest to bring the poorest up to the average.

Step by step, on the original (horizontal) histogram with average height 20: take 17 from strip a to bring strip e up to the average, recording the donor and using the old poor level to mark the lower part of the finished column. Then bring d up to the average with donor b, again recording the donor and marking the lower part with the old poor level. Then bring a up to the average with donor c, and finally bring b up to the average with donor c, in the same way. We now have a squared histogram, i.e. a rectangle of strips of equal area, each strip with two regions, and a single uniform variate U can be used to generate a, b, c, d, e with the required probabilities (.32, .27, .26, .12, .03).

Setup: make tables V, K, and T. Generation: let j be the integer part of 1 + 5U, with U uniform in (0,1); if U < T_j return V_j, else return V(K_j). In many applications no V table is necessary: V_i = i, and the generating procedure becomes: if U < T_j return j, else return K_j.
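A minimal Python sketch of the Robin Hood setup and the one-uniform generation step; the function names are ours, and the five-point distribution matches the worked example above (heights 32, 27, 26, 12, 3 over an average of 20):

    import random

    def robin_hood_setup(p):
        """Square the histogram: repeatedly take from the richest column
        to bring the poorest up to the average 1/n. Returns the threshold
        table T and the alias table K."""
        n = len(p)
        p = list(p)
        K = list(range(n))                   # alias: where the top part came from
        T = [(i + 1) / n for i in range(n)]  # untouched column: always return i
        for _ in range(n - 1):
            poor = min(range(n), key=lambda i: p[i])
            rich = max(range(n), key=lambda i: p[i])
            K[poor] = rich
            T[poor] = poor / n + p[poor]     # lower part belongs to 'poor'
            p[rich] -= 1 / n - p[poor]       # donor pays the difference
            p[poor] = 1 / n                  # column 'poor' is done
        return T, K

    def draw(T, K):
        """A single uniform U picks the column and the lower/upper part."""
        n = len(T)
        u = random.random()
        j = int(n * u)
        return j if u < T[j] else K[j]

    T, K = robin_hood_setup([0.32, 0.27, 0.26, 0.12, 0.03])
    random.seed(7)
    counts = [0] * 5
    for _ in range(100_000):
        counts[draw(T, K)] += 1
    print([c / 100_000 for c in counts])     # close to the target probabilities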
References and further reading:

Aiello W., S. Rajagopalan, and R. Venkatesan, Design of practical and provably good random number generators, Journal of Algorithms, 29, 358-389, 1998.
Dagpunar J., Principles of Random Variate Generation, Clarendon, 1988.
Fishman G., Monte Carlo, Springer, 1996.
James F., A Fortran version of the L'Ecuyer generator, Comput. Phys. Comm., 60, 329-344, 1990.
Knuth D., The Art of Computer Programming, Vol. 2, Addison-Wesley, 1998.
L'Ecuyer P., Efficient and portable combined random number generators, Comm. ACM, 31, 742-749, 774, 1988.
L'Ecuyer P., Uniform random number generation, Ann. Op. Res., 53, 77-120, 1994.
L'Ecuyer P., Random number generation, in: Handbook of Simulation, J. Banks (ed.), Wiley, 1998.
Maurer U., A universal statistical test for random bit generators, J. Cryptology, 5, 89-105, 1992.
Sobol I. and Y. Levitan, A pseudo-random number generator for personal computers, Computers and Mathematics with Applications, 37(4), 33-40, 1999.
Tsang W-W., A decision tree algorithm for squaring the histogram in random number generation, Ars Combinatoria, 23A, 291-301, 1987.

Test for Randomness

We need to test random numbers both for randomness and for uniformity. The tests can be categorized into two groups: empirical or statistical tests, and theoretical tests. Theoretical tests deal with the properties of the generator used to create the realizations with the desired distribution, and do not look at the numbers generated at all; for example, we would not use a generator with poor qualities to generate random numbers. Statistical tests are based solely on the random observations produced.

Tests for randomness:

A. Test for independence: plot the realizations x_i versus x_(i+1). If there is independence, the graph will not show any distinctive pattern but will be perfectly scattered.

B. Runs test (run-ups and run-downs): this is a direct test of the independence assumption. There are two test statistics to consider: one based on a normal approximation and another using numerical approximations. Test based on the normal approximation: suppose you have N random realizations, and let a be the total number of runs in the sequence. If the number of positive and negative runs is greater than, say, 20, the distribution of a is approximately normal with mean (2N − 1)/3 and variance (16N − 29)/90. Reject the hypothesis of independence (i.e. conclude that runs exist) if |Z₀| > Z(1−α/2), where Z₀ is the Z score.

C. Correlation tests: do the random numbers exhibit discernible correlation? Compute the sample autocorrelation function.

Frequency or uniform distribution test: use the Kolmogorov-Smirnov test to determine whether the realizations follow a U(0,1).
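A minimal sketch of the runs-up-and-down test just described (it assumes a continuous-valued series with no ties):

    import math
    import random

    def runs_test(xs):
        """Count maximal runs of increases/decreases and compare with the
        normal approximation: mean (2N - 1)/3, variance (16N - 29)/90."""
        n = len(xs)
        signs = [1 if b > a else -1 for a, b in zip(xs, xs[1:])]
        runs = 1 + sum(s != t for s, t in zip(signs, signs[1:]))
        z0 = (runs - (2 * n - 1) / 3) / math.sqrt((16 * n - 29) / 90)
        return runs, z0   # reject independence at the 5% level if |z0| > 1.96

    random.seed(5)
    print(runs_test([random.random() for _ in range(1000)]))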
References and further reading:

Headrick T., Fast fifth-order polynomial transforms for generating univariate and multivariate non-normal distributions, Computational Statistics and Data Analysis, 40(4), 685-711, 2002.
Karian Z. and E. Dudewicz, Modern Statistical Systems and GPSS Simulation, CRC Press, 1998.
Kleijnen J. and W. van Groenendaal, Simulation: A Statistical Perspective, Wiley, Chichester, 1992.
Korn G., Real statistical experiments can use simulation-package software, Simulation Modelling Practice and Theory, 13(1), 39-54, 2005.
Lewis P. and E. Orav, Simulation Methodology for Statisticians, Operations Analysts, and Engineers, Wadsworth Inc., 1989.
Madu Ch. and Ch-H. Kuei, Experimental Statistical Designs and Analysis in Simulation Modeling, Greenwood Publishing Group, 1993.
Pang K., Z. Yang, S. Hou, and P. Leung, Non-uniform random variate generation by the vertical strip method, European Journal of Operational Research, 142(3), 595-609, 2002.
Robert C. and G. Casella, Monte Carlo Statistical Methods, Springer, 1999.

Modeling and Simulation

Simulation in general is to pretend that one deals with a real thing while really working with an imitation. In operations research, the imitation is a computer model of the simulated reality. A flight simulator on a PC is also a computer model of some aspects of flight: it shows on the screen the controls and what the pilot (the youth who operates it) is supposed to see from the cockpit (his armchair).

Why use models? To fly a simulator is safer and cheaper than to fly the real airplane. For precisely this reason, models are used in industry and in the military: it is very costly, dangerous, and often impossible to carry out experiments with real systems. Provided that models are adequate descriptions of reality (that they are valid), experimenting with them can save money, suffering, and even time.

When to use simulations? Systems that change with time, such as a gas station where cars come and go (so-called dynamic systems), and that involve randomness (nobody can guess at exactly what time the next car will arrive at the station) are good candidates for simulation. Modeling complex dynamic systems theoretically requires too many simplifications, and the emerging models may not be valid. Simulation does not require that many simplifying assumptions, making it a useful tool even in the absence of randomness.

How to simulate? Suppose we are interested in a gas station. We may describe the behavior of this system graphically by plotting the number of cars in the station as the state of the system. Every time a car arrives, the graph increases by one unit, while a departing car causes the graph to drop by one unit. This graph (called a sample path) could be obtained from observation of a real station, but it could also be constructed artificially. Such artificial construction, and the analysis of the resulting sample path (or of several sample paths in more complex cases), constitutes the simulation.

Types of simulations: discrete event. The above sample path consisted of only horizontal and vertical lines, since car arrivals and departures occurred at distinct points in time, which we refer to as events. Between two consecutive events nothing happens: the graph is horizontal. When the number of events is finite, we call the simulation discrete event. In some systems the state changes all the time, not just at the times of some discrete events. For example, the water level in a reservoir with given in- and outflows may change continuously. In such cases continuous simulation is more appropriate, although discrete event simulation can serve as an approximation. We consider discrete event simulations further.

How is simulation performed? Simulations may be performed manually. Most often, however, the system model is written either as a computer program or as some kind of input to simulator software.

State: a variable characterizing an attribute of the system, such as the level of stock in inventory or the number of jobs waiting for processing.

Event: an occurrence at a point in time which may change the state of the system, such as the arrival of a customer or the start of work on a job.

Entity: an object that passes through the system, such as cars at an intersection or orders in a factory. Often an event (e.g. arrival) is associated with an entity (e.g. customer).
Queue: a queue is not only a physical queue of people; it can also be a task list, a buffer of finished goods waiting for transportation, or any place where entities are waiting for something to happen, for whatever reason.

Creating: creating means causing a new entity to arrive in the system at a certain point in time.

Scheduling: scheduling is the act of assigning a new future event to an existing entity.

Random variable: a random variable is an uncertain quantity, such as the interarrival time between two incoming flights or the number of defective parts in a shipment.

Random variate: a random variate is an artificially generated random variable.

Distribution: a distribution is the mathematical law which governs the probabilistic features of a random variable.

A simple example: building a simulation of a gas station with a single pump served by a single service man. Assume that the arrivals of cars, as well as their service times, are random. First identify: the states (the number of cars waiting and the number of cars being served at any moment); the events (arrival of a car, start of service, end of service); the entities (the cars); the queue (the queue of cars in front of the pump, waiting for service); the random realizations (interarrival times, service times); and the distributions (we shall assume exponential distributions for both the interarrival time and the service time).

Next, specify what to do at each event. For the above example: at the event of an entity's arrival, create the next arrival and, if the server is free, send the entity for the start of service; otherwise it joins the queue. At the start of service, the server becomes occupied, and the end of service is scheduled for this entity. At the end of service, the server becomes free and, if any entities are waiting in the queue, the first entity is removed from the queue and sent for the start of service. Some initialization is still required, for example the creation of the first arrival. Finally, the above is translated into code. This is easy with an appropriate library which has subroutines for creation, scheduling, proper timing of events, queue manipulation, random variate generation, and statistics collection. Besides the above, the program records the number of cars in the system before and after every change, together with the time of each event. A minimal sketch of such a program follows this paragraph.
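A minimal event-scheduling sketch of the single-pump gas station in Python; the arrival and service rates and the run length are illustrative assumptions, not values from the text:

    import heapq
    import random

    def simulate(arrival_rate=0.8, service_rate=1.0, horizon=100_000.0):
        clock, waiting, busy = 0.0, 0, False
        in_system, last, area = 0, 0.0, 0.0   # for the time-average of cars present
        events = [(random.expovariate(arrival_rate), "arrival")]  # first arrival
        while events:
            clock, kind = heapq.heappop(events)
            if clock > horizon:
                break
            area += in_system * (clock - last)   # record the state before the change
            last = clock
            if kind == "arrival":
                in_system += 1
                # create the next arrival
                heapq.heappush(events, (clock + random.expovariate(arrival_rate), "arrival"))
                if not busy:                     # server free: start of service
                    busy = True
                    heapq.heappush(events, (clock + random.expovariate(service_rate), "end"))
                else:                            # otherwise join the queue
                    waiting += 1
            else:                                # end of service
                in_system -= 1
                if waiting:                      # first car from the queue starts service
                    waiting -= 1
                    heapq.heappush(events, (clock + random.expovariate(service_rate), "end"))
                else:
                    busy = False
        return area / last                       # time-average number of cars present

    random.seed(2017)
    print(simulate())  # for rho = 0.8, queueing theory gives rho/(1 - rho) = 4 cars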
Development of Systems Simulation

Discrete event systems (DES) are dynamic systems which evolve in time through the occurrence of events at possibly irregular time intervals. DES abound in practice; examples include traffic systems, flexible manufacturing systems, computer-communications systems, production lines, coherent lifetime systems, and flow networks. Most of these systems can be modeled in terms of discrete events whose occurrence causes the system to change from one state to another. In designing, analyzing, and operating such complex systems, one is interested not only in performance evaluation but also in sensitivity analysis and optimization.

A typical stochastic system has a large number of control parameters that can have a significant impact on the performance of the system. To establish a basic knowledge of the behavior of a system under variation of input parameter values, and to estimate the relative importance of the input parameters, sensitivity analysis applies small changes to the nominal values of input parameters. For systems simulation, variations of the input parameter values cannot be made infinitely small. The sensitivity of the performance measure with respect to an input parameter is therefore defined as a (partial) derivative. Sensitivity analysis is concerned with evaluating sensitivities (gradients, Hessian, etc.) of performance measures with respect to parameters of interest. It provides guidance for design and operational decisions and plays a pivotal role in identifying the most significant system parameters, as well as bottleneck subsystems.

I have carried out research in the fields of sensitivity analysis and stochastic optimization of discrete event systems, with an emphasis on computer simulation models. This part of the lecture is dedicated to the estimation of an entire response surface of complex discrete event systems from a single sample path (simulation), such as the expected waiting time of a customer in a queuing network, with respect to the controllable parameters of the system, such as service rates, buffer sizes, and routing probabilities. With the response surfaces at hand, we are able to perform sensitivity analysis and optimization of a DES from a single simulation, that is, to find the optimal parameters of the system and their sensitivities (derivatives) with respect to uncontrollable system parameters, such as arrival rates in a queuing network.

We identify three distinct processes. Descriptive analysis includes: problem identification and formulation; data collection and analysis; computer simulation model development; validation, verification, and calibration; and finally performance evaluation. Prescriptive analysis covers optimization or goal seeking. These are necessary components for post-prescriptive analysis: sensitivity and what-if analysis. Prescriptive simulation attempts to use simulation to prescribe the decisions required to obtain specified results. It is subdivided into two topics: goal seeking and optimization. Recent developments in single-run algorithms for the needed sensitivities (i.e. gradient, Hessian, etc.) make prescriptive simulation feasible.

Problem formulation: identify controllable and uncontrollable inputs; identify constraints on the decision variables; define the measure of system performance and an objective function; develop a preliminary model structure to interrelate the inputs and the measure of performance.

Data collection and analysis: regardless of the method used to collect the data, the decision of how much to collect is a trade-off between cost and accuracy.

Simulation model development: acquiring sufficient understanding of the system to develop an appropriate conceptual, logical, and then simulation model is one of the most difficult tasks in simulation analysis.

Model validation, verification, and calibration: in general, verification focuses on the internal consistency of a model, while validation is concerned with the correspondence between the model and reality.
The term validation is applied to those processes which seek to determine whether or not a simulation is correct with respect to the real system. More prosaically, validation is concerned with the question "Are we building the right system?"; verification, on the other hand, seeks to answer the question "Are we building the system right?" Verification checks that the implementation of the simulation model (the program) corresponds to the model; validation checks that the model corresponds to reality; calibration checks that the data generated by the simulation match real (observed) data.

Validation: the process of comparing the model's output with the behavior of the phenomenon; in other words, comparing model execution to reality (physical or otherwise).

Verification: the process of comparing the computer code with the model to ensure that the code is a correct implementation of the model.

Calibration: the process of parameter estimation for a model. Calibration is a tweaking/tuning of existing parameters; it usually does not involve introducing new ones or changing the model structure. In the context of optimization, calibration is an optimization procedure involved in system identification or during experimental design.

Input and output analysis: discrete-event simulation models typically have stochastic components that mimic the probabilistic nature of the system under consideration. Successful input modeling requires a close match between the input model and the true underlying probabilistic mechanism associated with the system. Input data analysis is the task of modeling an element (e.g. the arrival process, service times) in a discrete-event simulation, given a data set collected on the element of interest. This stage performs intensive error checking on the input data, including external, policy, random, and deterministic variables. A system simulation experiment is performed to learn about the system's behavior. Careful planning, or designing, of simulation experiments is generally a great help, saving time and effort by providing efficient ways to estimate the effects of changes in the model's inputs on its outputs. Statistical experimental-design methods are mostly used in the context of simulation experiments.

Performance evaluation and what-if analysis: the what-if analysis is at the very heart of simulation models.

Sensitivity estimation: users must be provided with affordable techniques for sensitivity analysis if they are to understand which relationships are meaningful in complicated models.

Optimization: traditional optimization techniques require gradient estimation. As with sensitivity analysis, the current approach for optimization requires intensive simulation to construct an approximate response surface function. Incorporating gradient estimation techniques into convergent algorithms, such as Robbins-Monro type algorithms, for optimization purposes will be considered.

Gradient estimation applications: there are a number of applications which use sensitivity information (i.e. the gradient, Hessian, etc.): local information, structural properties, response surface generation, the goal-seeking problem, optimization, the what-if problem, and meta-modeling.

Report generation: report generation is a critical link in the communication process between the model and the end user.

A Classification of Stochastic Processes

A stochastic process is a probabilistic model of a system that evolves randomly in time and space.
Formally, a stochastic process is a collection of random variables, all defined on a common sample (probability) space. X(t) is the state, while (time) t is the index, a member of the set T. Examples are the delay of the ith customer, and the number of customers in the queue at time t, in an M/M/1 queue. In the first example we have a discrete-time, continuous-state process, while in the second example the state is discrete and time is continuous. Stochastic processes are thus classified by the nature of the change in the states of the system (discrete or continuous) and by the nature of the time index (discrete or continuous). Man-made systems mostly have discrete states. Monte Carlo simulation deals with discrete time, while in discrete event system simulation, which is at the heart of this site, the time dimension is continuous.

Simulation Output Data and Stochastic Processes

To perform statistical analysis of the simulation output, we need to establish some conditions, e.g. the output data must be a covariance stationary process (e.g. the data collected over n simulation runs).

Stationary process (strictly stationary): a stationary stochastic process is a stochastic process with the property that the joint distribution of all vectors of dimension h remains the same for any fixed h.

First-order stationary: a stochastic process is first-order stationary if the expected value of X(t) remains the same for all t. For example, in economic time series a process is first-order stationary when we remove any kind of trend by some mechanism such as differencing.

Second-order stationary: a stochastic process is second-order stationary if it is first-order stationary and the covariance between X(t) and X(s) is a function of t − s only. Again, in economic time series, a process is second-order stationary when we also stabilize its variance by some kind of transformation, such as taking the square root. Clearly, a (strictly) stationary process is second-order stationary; the reverse, however, may not hold.

In simulation output statistical analysis we are satisfied if the output is covariance stationary. Covariance stationary: a covariance stationary process is a stochastic process having finite second moments, i.e. the expected value of X(t)² is finite. Clearly, any stationary process with finite second moments is covariance stationary; a stationary process may have no finite moments whatsoever. Since a Gaussian process is characterized by its mean and covariance matrix only, it is (strictly) stationary if it is covariance stationary.

Two contrasting stationary processes: consider the following two extreme stochastic processes. A sequence Y₀, Y₁, ... of independent, identically distributed random values is a stationary process; if its common distribution has a finite variance, then the process is covariance stationary. At the other extreme, let Z be a single random variable with known distribution function, and set Z₀ = Z₁ = ... = Z. Note that in a realization of this process the first element, Z₀, may be random, but after that there is no randomness. The process {Z_i, i = 0, 1, 2, ...} is stationary if Z has a finite variance. Output data in simulation fall between these two types of process: simulation outputs are identically distributed but mildly correlated (how mildly depends, in a queueing system for example, on how large the traffic intensity ρ is). An example is the delay process of the customers in a queueing system.
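A minimal sketch for checking how correlated a simulation output series actually is, using the sample autocorrelation (with the maximum lag of about 2% of n suggested by the rule of thumb earlier on this page):

    def autocorrelation(xs, max_lag):
        """Sample autocorrelation r_j of an output series for lags
        j = 1..max_lag."""
        n = len(xs)
        mean = sum(xs) / n
        c0 = sum((x - mean) ** 2 for x in xs) / n
        r = []
        for j in range(1, max_lag + 1):
            cj = sum((xs[i] - mean) * (xs[i + j] - mean)
                     for i in range(n - j)) / n
            r.append(cj / c0)
        return r

    # Waiting times from a queueing simulation are positively correlated;
    # strong, slowly decaying r_j values warn against treating the
    # observations as independent.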
Techniques for Steady-State Simulation

Unlike in queuing theory, where steady-state results for some models are easily obtainable, steady-state simulation is not an easy task. The opposite is true for obtaining results for the transient period (i.e. the warm-up period). Gathering steady-state simulation output requires statistical assurance that the simulation model has reached the steady state. The main difficulty is to obtain independent simulation runs with the transient period excluded. The two techniques commonly used for steady-state simulation are the method of batch means and the method of independent replications. Neither of these two methods is superior to the other in all cases; their performance depends on the magnitude of the traffic intensity. The other available technique is the regenerative method, which is mostly used for its nice theoretical properties; it is, however, rarely applied in actual simulations for obtaining steady-state numerical results.

Regenerative method: suppose you have a regenerative simulation consisting of m cycles of sizes n₁, n₂, ..., n_m, respectively. The cycle sums are:

y_i = Σ x_ij, where the sum is over j = 1 to n_i.

The overall estimate is:

Estimate = Σ y_i / Σ n_i, where the sums are over i = 1 to m.

The 100(1 − α)% confidence interval, using the Z-table (or the t-table for m less than, say, 30), is:

Estimate ± Z · S / (n̄ √m), where n̄ = Σ n_i / m (sum over i = 1 to m),

and the variance is:

S² = Σ (y_i − n_i · Estimate)² / (m − 1), the sum over i = 1 to m.

Method of batch means: this method involves only one very long simulation run, which is suitably subdivided into an initial transient period and n batches. Each batch is then treated as an independent run of the simulation experiment, while no observations are made during the transient period, which is treated as the warm-up interval. Choosing a large batch interval size effectively leads to independent batches, and hence independent runs of the simulation; however, since the number of batches is then small, one cannot invoke the central limit theorem to construct the needed confidence interval. On the other hand, choosing a small batch interval size effectively leads to significant correlation between successive batches, so the results cannot be applied in constructing an accurate confidence interval.

Suppose you have n equal batches of m observations each. The mean of each batch is:

mean_i = Σ x_ij / m, where the sum is over j = 1 to m.

The overall estimate is:

Estimate = Σ mean_i / n, where the sum is over i = 1 to n.

The 100(1 − α)% confidence interval, using the Z-table (or the t-table for n less than, say, 30), is:

Estimate ± Z · S / √n, where the variance is S² = Σ (mean_i − Estimate)² / (n − 1), the sum over i = 1 to n.

Method of independent replications: this method is the most popular for systems with a short transient period. It requires independent runs of the simulation experiment with different initial random seeds for the simulator's random number generator. For each independent replication, the transient period is removed; for the observation intervals after the transient period, the data are collected and processed for the point estimate of the performance measure and for its subsequent confidence interval. With n replications of m observations each, the replication means, the overall estimate, and the 100(1 − α)% confidence interval are computed exactly as in the method of batch means above. A small computational sketch of the batch-means interval follows.
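A minimal Python sketch of the batch-means confidence interval; the function name, batch count, and warm-up length are illustrative choices:

    import math

    def batch_means_ci(xs, n_batches=20, warmup=0, z=1.96):
        """Drop the warm-up observations, split the rest into n_batches
        batches, and build a CI from the batch means. Assumes batches
        are long enough to be treated as approximately independent."""
        data = xs[warmup:]
        m = len(data) // n_batches                   # observations per batch
        means = [sum(data[i*m:(i+1)*m]) / m for i in range(n_batches)]
        est = sum(means) / n_batches
        s2 = sum((b - est) ** 2 for b in means) / (n_batches - 1)
        half = z * math.sqrt(s2 / n_batches)
        return est - half, est + half

    # xs: raw output series from one long run, e.g. 10,000 waiting times:
    # print(batch_means_ci(xs, n_batches=20, warmup=1000))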
Further reading:

Sherman M. and D. Goldsman, Large-sample normality of the batch-means variance estimator, Operations Research Letters, 30, 319-326, 2002.
Whitt W., The efficiency of one long run versus independent replications in steady-state simulation, Management Science, 37(6), 645-666, 1991.

Determination of the Warm-up Period

To estimate the long-term performance measure of the system, there are several methods, such as batch means, independent replications, and the regenerative method. Batch means is a method of estimating the steady-state characteristic from a single simulation run. The single run is partitioned into equal-size batches, large enough for the estimates obtained from different batches to be approximately independent. In the method of batch means it is important to ensure that the bias due to initial conditions is removed, so as to obtain at least a covariance stationary waiting-time process. An obvious remedy is to run the simulation for a period long enough to remove the effect of the initial bias. During this warm-up period, no attempt is made to record the output of the simulation; the results are thrown away. At the end of the warm-up period, the waiting times of customers are collected for analysis. The practical question is: how long should the warm-up period be?

Abate and Whitt provided a relatively simple and nice expression for the time (t_p) required for an M/M/1 queue system (with traffic intensity ρ), starting at the origin (empty), to reach and remain within 100p% of the steady-state limit; the expression involves the constant

C(ρ) = [2 + ρ + (ρ² + 4ρ)^(1/2)] / 4

(see the reference below for the full formula). Some values of t_p(ρ), as a function of ρ and p, were given in a table (omitted here): the time t_p required for an M/M/1 queue to reach and remain within 100p% limits of the steady-state value. Although this result was developed for M/M/1 queues, it has been established that it can serve as an approximation for more general, i.e. GI/G/1, queues.

Further reading:

Abate J. and W. Whitt, Transient behavior of regulated Brownian motion, Advances in Applied Probability, 19, 560-631, 1987.
Chen E. and W. Kelton, Determining simulation run length with the runs test, Simulation Modelling Practice and Theory, 11, 237-250, 2003.

Determination of the Desirable Number of Simulation Runs

The two widely used methods for experimentation on simulation models are the method of batch means and independent replications. Intuitively, one may say that the method of independent replications is superior at producing a statistically good estimate of the system's performance measure. In fact, neither method is superior in all cases; it all depends on the traffic intensity ρ. After deciding which method is more suitable to apply, the main question is the determination of the number of runs. That is, at the planning stage of a simulation investigation, the question of the number of simulation runs (n) is critical. The confidence level of simulation output drawn from a set of simulation runs depends on the size of the data set: the larger the number of runs, the higher the associated confidence. However, more simulation runs also require more effort and resources for large systems. Thus, the main goal must be to find the smallest number of simulation runs that will provide the desired confidence.

Pilot studies: when the statistics needed for the calculation of the number of simulation runs are not available from an existing database, a pilot simulation is needed. For a large pilot simulation run (n, say, over 30), the simplest determinate of the number of runs is:

n ≥ Z² S² / d²

where d is the desired margin of error (i.e. the absolute error), which is the half-length of the confidence interval with 100(1 − α)% confidence, and S² is the variance obtained from the pilot run.
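A minimal sketch of this absolute-precision determinate (the relative-error variant is discussed next); the example numbers are illustrative:

    import math

    def runs_for_absolute_error(s2_pilot, d, z=1.96):
        """Smallest n for which the CI half-length is at most d, given a
        pilot-run variance estimate (n assumed > 30, so the normal
        approximation applies)."""
        return math.ceil(z**2 * s2_pilot / d**2)

    # e.g. pilot variance 25, wanting the mean within +/- 0.5:
    print(runs_for_absolute_error(25.0, 0.5))  # -> 385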
One may use the following sample-size determinate for a desired relative error D (in %), which requires an estimate of the coefficient of variation (C.V., in %) from a pilot run with n over 30:

n ≥ Z² (C.V.)² / D².

These sample-size determinates can also be used for simulation output estimation of unimodal output populations, with discrete or continuous random variables, provided the pilot run size (n) is larger than, say, 30. The aim in applying any one of the above determinates of the number of runs is to improve your pilot estimates at feasible cost. You may like to use the Applet for the determination of the number of runs.

Further reading:

Díaz-Emparanza I., Is a small Monte Carlo analysis a good analysis? Checking the size, power and consistency of a simulation-based test, Statistical Papers, 43(4), 567-577, 2002.
Whitt W., The efficiency of one long run versus independent replications in steady-state simulation, Management Science, 37(6), 645-666, 1991.

Determination of Simulation Run Size

At the planning stage of a simulation modeling study, the question of the number of simulation runs (n) is critical. The following Java applets compute the needed run size, based on the information currently available from a pilot simulation run, to achieve an acceptable accuracy and/or risk. Enter the needed information, and then click the Calculate button. The aim of applying any one of the following determinates of the number of simulation runs is to improve your pilot estimates at a feasible cost. Notes: the normality condition may be relaxed when the number of simulation runs is over, say, 30. Moreover, the determination of the number of simulation runs for the mean can also be used for other unimodal simulation output distributions, including those with discrete random variables, such as a proportion, provided the pilot run is sufficiently large (say, over 30). Run size with acceptable absolute precision.

Simulation Software Selection

The vast amount of simulation software available can be overwhelming for new users. The following is only a random sample of the software on the market today: ACSL, APROS, ARTIFEX, Arena, AutoMod, CSIM, Callim, FluidFlow, GPSS, Gepasi, JavSim, MJX, MedModel, Mesquite, Multiverse, NETWORK, OPNET Modeler, POSES, Simul8, Powersim, QUEST, REAL, SHIFT, SIMPLE, SIMSCRIPT, SLAM, SMPL, SimBank, SimPlusPlus, TIERRA, Witness, SIMNON, VISSIM, and javasim. There are several things that make an ideal simulation package. Some are properties of the package, such as support, reactivity to bug notification, interface, etc. Some are properties of the user, such as their needs and their level of expertise. For these reasons, asking which package is best is a failure of judgment. The first question to ask is for what purpose you need the software: is it for education, teaching, student projects, or research? The main question is: what are the important aspects to look for in a package? The answer depends on the specific application; however, some general criteria are: input facilities, processing that allows some programming, optimization capability, output facilities, environment (including training and support services), input-output statistical data analysis capability, and certainly the cost factor. You must know which features are appropriate for your situation, although this is not a simple yes-or-no judgment. For descriptions of available simulation software, visit the Simulation Software Survey.
Reference and further reading:

Nikoukaran J., Software selection for simulation in manufacturing: A review, Simulation Practice and Theory, 7(1), 1-14, 1999.

Animation in Systems Simulation

Animation in systems simulation is a useful tool. Most graphically based software packages have default animation, which is quite useful for model debugging, validation, and verification. This type of animation comes with little or no additional effort and gives the modeler additional insight into how the model works, and it augments the modeling tools available. More realistic animation presents qualities intended to be useful to the decision maker in implementing the developed simulation model. There are also good model-management tools: some tools have been developed which combine a database with simulation to store models, data, results, and animations. However, no one product provides all of those capabilities.

SIMSCRIPT II.5

Without a computer, one cannot perform any realistic dynamic systems simulation. SIMSCRIPT II.5 is a powerful, free-format, English-like simulation language designed to greatly simplify writing programs for simulation modelling. Programs written in SIMSCRIPT II.5 are easily read and maintained. They are accurate, efficient, and generate results which are acceptable to users. Unlike other simulation programming languages, SIMSCRIPT II.5 requires no coding in other languages. SIMSCRIPT II.5 has been fully supported for over 33 years. Contributing to the wide acceptance and success of SIMSCRIPT II.5 modelling are: a powerful worldview, consisting of entities and processes, which provides a natural conceptual framework with which to relate real objects to the model; a modern, free-form language with structured programming constructs and all the built-in facilities needed for model development, so that model components can be programmed to clearly reflect the organization and logic of the modeled system (the amount of program needed to model a system is typically 75% less than its FORTRAN or C counterpart); a well-designed package of program debug facilities, providing the tools required to detect errors in a complex computer program; and simulation status information, with control optionally transferred to a user program for additional analysis and output. This structure allows the model to evolve easily and naturally from simple to detailed formulation as data become available. Many modifications, such as the choice of set disciplines and statistics, are simply specified in the Preamble. You get a powerful, English-like language supporting a modular implementation. Because each model component is readable and self-contained, the model documentation is the model listing: it is never obsolete or inaccurate. For more information contact SIMSCRIPT; see also the guidelines for running SIMSCRIPT on the VAX system.

System Dynamics and Discrete Event Simulation

The modeling techniques used by system dynamics and discrete event simulation often differ at two levels: the modeler's way of representing systems may be different, and the underlying simulators' algorithms are also different. Each technique is well tuned to the purpose for which it is intended. However, one may use a discrete event approach to do system dynamics, and vice versa. Traditionally, the most important distinction is the purpose of the modeling.
The discrete event approach is to find, e.g., how many resources the decision maker needs, such as how many trucks, and how to arrange the resources to avoid bottlenecks, i.e., excessive waiting lines, waiting times, or inventories. The system dynamics approach, by contrast, is to prescribe for the decision maker how, e.g., to respond in a timely way to any changes, and how to change the physical structure, e.g., the physical shipping delay time, so that inventories, sales, production, etc., behave as desired.

System dynamics is the rigorous study of problems in system behavior using the principles of feedback, dynamics, and simulation. More specifically, system dynamics is characterized by: searching for useful solutions to real problems, especially in social systems (businesses, schools, governments, ...) and the environment; using computer simulation models to understand and improve such systems; basing the simulation models on mental models, qualitative knowledge, and numerical information; using methods and insights from feedback control engineering and other scientific disciplines to assess and improve the quality of models; and seeking improved ways to translate scientific results into achieved, implemented improvement.

The systems dynamics approach looks at systems at a very high level, so it is more suited to strategic analysis. The discrete event approach may look at subsystems for a detailed analysis and is more suited, e.g., to process re-engineering problems. Systems dynamics is indicative, i.e., it helps us understand the direction and magnitude of effects (i.e., where in the system we need to make the changes), whereas the discrete event approach is predictive (i.e., how many resources do we need to achieve a certain goal of throughput). Systems dynamics analysis is continuous in time and uses mostly deterministic analysis, whereas the discrete event approach deals with analysis over a specific time horizon and uses stochastic analysis.

Some interesting and useful areas of the system dynamics modeling approach are: short-term and long-term forecasting of agricultural produce, with special reference to field crops and perennial fruits such as grapes, which have significant processing sectors of different proportions of total output, where both demand-side and supply-side perspectives are being considered; and the long-term relationship between the financial statements (balance sheet, income statement, and cash flow statement), balanced against stock market scenarios, in seeking a stable/growing share price combined with a satisfactory dividend and a related return-on-shareholder-funds policy. Managerial applications include the development and evaluation of short-term and long-term strategic plans, budget analysis and assessment, business audits, and benchmarking.

A modeler must consider both as complementary tools to each other: use systems dynamics to look at the high-level problem and identify areas that need more detailed analysis, then use discrete event modeling tools to analyze (and predict) the specific areas of interest.

What Is Social Simulation

Social scientists have always constructed models of social phenomena. Simulation is an important method for modeling social and economic processes. In particular, it provides a middle way between the richness of discursive theorizing and rigorous but restrictive mathematical models. There are different types of computer simulation and different applications of them to social scientific problems. Faster hardware and improved software have made building complex simulations easier.
Computer simulation methods can be effective for the development of theories as well as for prediction. For example, macro-economic models have been used to simulate future changes in the economy, and simulations have been used in psychology to study cognitive mechanisms. The field of social simulation seems to be following an interesting line of inquiry. As a general approach in the field, a world is specified with much computational detail. Then the world is simulated (using computers) to reveal some of the non-trivial implications (or emergent properties) of the world. When these non-trivial implications are made known (fed back) to the world, they constitute some added value.

Artificial Life is an interdisciplinary enterprise aimed at understanding life-as-it-is and life-as-it-could-be, and at synthesizing life-like phenomena in chemical, electronic, software, and other artificial media. Artificial Life redefines the concepts of artificial and natural, blurring the borders between traditional disciplines and providing new media and new insights into the origin and principles of life. Simulation allows the social scientist to experiment with artificial societies and explore the implications of theories in ways not otherwise possible.

References and Further Readings:
Gilbert N. and K. Troitzsch, Simulation for the Social Scientist, Open University Press, Buckingham, UK, 1999.
Sichman J., R. Conte, and N. Gilbert (eds.), Multi-Agent Systems and Agent-Based Simulation, Springer-Verlag, Berlin, 1998.

What Is Web-based Simulation

Web-based simulation is quickly emerging as an area of significant interest for both simulation researchers and simulation practitioners. This interest is a natural outgrowth of the proliferation of the World-Wide Web and its attendant technologies (e.g., HTML, HTTP, CGI, etc.), and of the surging popularity of, and reliance upon, computer simulation as a problem-solving and decision-support tool. The appearance of the network-friendly programming language Java, and of distributed object technologies like the Common Object Request Broker Architecture (CORBA) and the Object Linking and Embedding / Component Object Model (OLE/COM), have had particularly acute effects on the state of simulation practice. Currently, researchers in the field of web-based simulation are interested in topics such as methodologies for web-based model development, collaborative model development over the Internet, Java-based modeling and simulation, distributed modeling and simulation using web technologies, and new applications.

Parallel and Distributed Simulation

The increasing size of systems and designs requires more efficient simulation strategies to accelerate the simulation process. Parallel and distributed simulation approaches seem to be promising in this direction. Current topics under extensive research are: synchronization, scheduling, memory management, randomized and reactive/adaptive algorithms, partitioning, and load balancing; synchronization in multi-user distributed simulation, virtual reality environments, HLA, and interoperability; system modeling for parallel simulation, specification, re-use of models/code, and parallelizing existing simulations; language and implementation issues, models of parallel simulation, execution environments, and libraries; and theoretical and empirical studies, prediction and analysis, cost models, benchmarks, and comparative studies.
Further topics include computer architectures, VLSI, telecommunication networks, manufacturing, dynamic systems, and biological/social systems, as well as web-based distributed simulation, such as multimedia and real-time applications, fault tolerance, implementation issues, use of Java, and CORBA.

References Further Readings:
Bossel H., Modeling and Simulation, A. K. Peters Pub., 1994.
Delaney W. and E. Vaccari, Dynamic Models and Discrete Event Simulation, Dekker, 1989.
Fishman G., Discrete-Event Simulation: Modeling, Programming and Analysis, Springer-Verlag, Berlin, 2001.
Fishwick P., Simulation Model Design and Execution: Building Digital Worlds, Prentice-Hall, Englewood Cliffs, 1995.
Ghosh S. and T. Lee, Modeling Asynchronous Distributed Simulation: Analyzing Complex Systems, IEEE Publications, 2000.
Gimblett R., Integrating Geographic Information Systems and Agent-Based Modeling: Techniques for Simulating Social and Ecological Processes, Oxford University Press, 2002.
Harrington J. and K. Tumay, Simulation Modeling Methods: An Interactive Guide to Results-Based Decision, McGraw-Hill, 1998.
Haas P., Stochastic Petri Net Models: Modeling and Simulation, Springer Verlag, 2002.
Hill D., Object-Oriented Analysis and Simulation Modeling, Addison-Wesley, 1996.
Kouikoglou V. and Y. Phillis, Hybrid Simulation Models of Production Networks, Kluwer Pub., 2001.
Law A. and W. Kelton, Simulation Modeling and Analysis, McGraw-Hill, 2000.
Nelson B., Stochastic Modeling: Analysis and Simulation, McGraw-Hill, 1995.
Oakshott L., Business Modelling and Simulation, Pitman Publishing, London, 1997.
Pidd M., Computer Simulation in Management Science, Wiley, 1998.
Rubinstein R. and B. Melamed, Modern Simulation and Modeling, Wiley, 1998.
Severance F., System Modeling and Simulation: An Introduction, Wiley, 2001.
Van den Bosch P. and A. Van der Klauw, Modeling, Identification and Simulation of Dynamical Systems, CRC Press, 1994.
Woods R. and K. Lawrence, Modeling and Simulation of Dynamic Systems, Prentice Hall, 1997.

Techniques for Sensitivity Estimation

Simulation continues to be the primary method by which engineers and managers obtain information about complex stochastic systems, such as telecommunication networks, health services, corporate planning, financial modeling, production assembly lines, and flexible manufacturing systems. These systems are driven by the occurrence of discrete events, and complex interactions among these discrete events occur over time. For most discrete event systems (DES) no analytical methods are available, so DES must be studied via simulation. DES are studied to understand their performance, and to determine the best ways to improve it. In particular, one is often interested in how system performance depends on the system's parameter v, which could be a vector. A DES's performance is often measured as an expected value. Consider a system with continuous parameter v ∈ V ⊆ R^n, where V is an open set. Let J(v) = E[Z(Y)] be the steady-state expected performance measure, where Y is a random vector with known probability density function (pdf) f(y; v) that depends on v, and Z is the performance measure. In discrete event systems, Monte Carlo simulation is usually needed to estimate J(v) for a given value v = v0. By the law of large numbers, the sample mean (1/n) Σ Z(y_i) converges to the true value, where y_i, i = 1, 2, ..., n, are independent, identically distributed random vector realizations of Y from f(y; v0), and n is the number of independent replications. We are interested in the estimation of the sensitivities of J(v) with respect to v.
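To make this estimator concrete, here is a minimal sketch (in Python; the queueing example, the function names, and all parameter values are illustrative assumptions, not part of the original text) of estimating J(v) by independent replications, where v is the mean service time of an M/M/1-type queue and Z is the average waiting time of the first m customers:

    import random

    def waiting_time_replication(v, arrival_rate=1.0, m=1000, rng=random):
        # One replication: average waiting time of m customers in an
        # M/M/1 queue with mean service time v (Lindley's recursion).
        w, total = 0.0, 0.0
        for _ in range(m):
            a = rng.expovariate(arrival_rate)   # inter-arrival time
            s = rng.expovariate(1.0 / v)        # service time, mean v
            w = max(0.0, w + s - a)             # wait of next customer
            total += w
        return total / m                        # Z(y): one realization

    def estimate_J(v, n=50):
        # Law of large numbers: the average of n i.i.d. replications
        # converges to J(v) as n grows.
        return sum(waiting_time_replication(v) for _ in range(n)) / n

    print(estimate_J(0.5))   # estimated mean wait for v = 0.5

Each call to waiting_time_replication produces one realization Z(y_i); averaging n of them gives the law-of-large-numbers estimate described above.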
Applications of sensitivity information

There are a number of areas where sensitivity information (the gradient, Hessian, etc.) of a performance measure J(v), or some estimate of it, is used for the purpose of analysis and control. In what follows, we single out a few such areas and briefly discuss them.

Local information: An estimate of dJ/dv is a good local measure of the effect of v on performance. For example, simply knowing the sign of the derivative dJ/dv at some point v immediately gives us the direction in which v should be changed. The magnitude of dJ/dv also provides useful information in an initial design process: if dJ/dv is small, we conclude that J is not very sensitive to changes in v, and hence concentrating on other parameters may improve performance.

Structural properties: Often sensitivity analysis provides not only a numerical value for the sample derivative, but also an expression which captures the nature of the dependence of a performance measure on the parameter v. The simplest case arises when dJ/dv can be seen to be always positive (or always negative) for any sample path; even when we cannot compute the value of dJ/dv, we then know that J(v) is monotonically increasing (or decreasing) in v. This information in itself is very useful in design and analysis. More generally, the form of dJ/dv can reveal interesting structural properties of the DES (e.g., monotonicity, convexity). Such properties can be exploited in order to determine optimal operating policies for some systems.

Response surface generation: Often our ultimate goal is to obtain the function J(v), i.e., a curve describing how the system responds to different values of v. Since J(v) is unknown, one alternative is to obtain estimates of J(v) for as many values of v as possible. This is clearly a prohibitively difficult task. Derivative information, however, may include not only first-order but also higher derivatives, which can be used to approximate J(v). If such derivative information can be easily and accurately obtained, the task of response surface generation may be accomplished as well.

Goal-seeking and what-if problems: Stochastic models typically depend upon various uncertain parameters that must be estimated from existing data sets. The statistical question of how input-parameter uncertainty propagates through the model into output-parameter uncertainty is the so-called what-if analysis. A good answer to this question often requires sensitivity estimates. The ordinary simulation output results are the solution of a direct problem: given the underlying pdf with a particular parameter value v, we may estimate the output function J(v). Now we pose the goal-seeking problem: given a target output value J0 of the system and a parameterized pdf family, find an input value for the parameter which generates such an output. There are strong motivations for both problems. When v is any controllable or uncontrollable parameter, the decision maker is, for example, interested in estimating J(v) for a small change in v; this is the so-called what-if problem, a direct problem that can be solved by incorporating sensitivity information in the Taylor expansion of J(v) in the neighborhood of v. However, when v is a controllable input, the decision maker may be interested in the goal-seeking problem: what change in the input parameter will achieve a desired change in the output value J(v)? Another application of goal-seeking arises when we want to adapt a model to satisfy a new equality constraint (condition) on some stochastic function.
The solution to the goal-seeking problem is to estimate the derivative of the output function with respect to the input parameter for the nominal system; use this estimate in a Taylor expansion of the output function in the neighborhood of the parameter; and, finally, use a Robbins-Monro (R-M) type stochastic approximation algorithm to estimate the necessary controllable input parameter value to within the desired accuracy.

Optimization: Discrete-event simulation is the primary analysis tool for designing complex systems. However, simulation must be linked with a mathematical optimization technique to be effectively used for systems design. The sensitivity dJ/dv can be used in conjunction with various optimization algorithms whose function is to gradually adjust v until a point is reached where J(v) is maximized (or minimized). If no other constraints on v are imposed, we expect dJ/dv = 0 at this point.

Finite difference approximation

Kiefer and Wolfowitz proposed a finite difference approximation to the derivative. One version of the Kiefer-Wolfowitz technique uses two-sided finite differences. The first fact to notice about the K-W estimate is that it requires 2N simulation runs, where N is the dimension of the vector parameter v: if the decision maker is interested in gradient estimation with respect to each of the components of v, then two simulations must be run for each component. This is inefficient. The second fact is that it may have a very poor variance, and it may result in numerical calculation difficulties.

Simultaneous perturbation methods

The simultaneous perturbation (SP) algorithm introduced by Dr. J. Spall has attracted considerable attention. There has recently been much interest in recursive optimization algorithms that rely on measurements of only the objective function to be optimized, not requiring direct measurements of the gradient of the objective function. Such algorithms have the advantage of not requiring detailed modeling information describing the relationship between the parameters to be optimized and the objective function. For example, many systems involving complex simulations or human beings are difficult to model, and could potentially benefit from such an optimization approach. The simultaneous perturbation stochastic approximation (SPSA) algorithm operates in the same framework as the above K-W methods, but has the strong advantage of requiring a much lower number of simulation runs to obtain the same quality of result. The essential feature of SPSA, which accounts for its power and relative ease of use in difficult multivariate optimization problems, is the underlying gradient approximation that requires only TWO objective function measurements regardless of the dimension of the optimization problem (one variation of basic SPSA uses only ONE objective function measurement per iteration). The underlying theory for SPSA shows that the N-fold savings in simulation runs per iteration (per gradient approximation) translates directly into an N-fold savings in the number of simulations needed to achieve a given quality of solution to the optimization problem. In other words, the K-W method and the SPSA method take the same number of iterations to converge to the answer, despite the N-fold savings in objective function measurements (e.g., simulation runs) per iteration in SPSA.
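To illustrate the measurement savings, the following sketch (Python; the noisy quadratic objective and all names are hypothetical stand-ins for a simulation response) contrasts a Kiefer-Wolfowitz style two-sided finite-difference gradient, which costs 2N function evaluations, with the basic SPSA approximation, which costs two evaluations regardless of N:

    import random

    def fd_gradient(f, v, c=0.01):
        # Kiefer-Wolfowitz style: two-sided differences, 2N evaluations.
        g = []
        for i in range(len(v)):
            vp = v[:]; vp[i] += c
            vm = v[:]; vm[i] -= c
            g.append((f(vp) - f(vm)) / (2 * c))
        return g

    def spsa_gradient(f, v, c=0.01):
        # SPSA: perturb ALL components at once with random +/-1 signs;
        # only TWO evaluations, whatever the dimension of v.
        delta = [random.choice((-1.0, 1.0)) for _ in v]
        vp = [vi + c * di for vi, di in zip(v, delta)]
        vm = [vi - c * di for vi, di in zip(v, delta)]
        df = f(vp) - f(vm)
        return [df / (2 * c * di) for di in delta]

    # Example: a noisy quadratic standing in for a simulation response.
    noisy = lambda v: sum(x * x for x in v) + random.gauss(0.0, 0.01)
    v0 = [1.0, -2.0, 0.5]
    print(fd_gradient(noisy, v0))    # 6 evaluations here (N = 3)
    print(spsa_gradient(noisy, v0))  # always 2 evaluations

In an optimization loop, either gradient estimate would be fed to a stochastic approximation recursion of the kind discussed later in this section.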
Perturbation analysis

Perturbation analysis (PA) computes (roughly) what the simulation would have produced had v been changed by a small amount, without actually making this change. The intuitive idea behind PA is that a sample path constructed using v is frequently structurally very similar to the sample path using the perturbed v: there is a large amount of information that is the same for both of them. It is wasteful to throw this information away and to start the simulation from scratch with the perturbed v. In PA, moreover, we can let the change approach zero to get a derivative estimator without numerical problems. We are interested in the effect of a parameter change on the performance measure. However, we would like to realize this change while keeping the order of events exactly the same. The perturbations will be so small that only the duration, not the order, of the states will be affected. This effect should be observed in three successive stages. Step 1: How does a change in the value of a parameter vary the sample duration related to that parameter? Step 2: How does the change in an individual sample duration reflect itself as a change in a subsequent particular sample realization? Step 3: Finally, what is the relationship between the variation of the sample realization and its expected value?

Score function methods

Using the score function method, the gradient can be estimated simultaneously, at any number of different parameter values, in a single-run simulation. The basic idea is that the gradient of the performance measure function, J(v), is expressed as an expectation with respect to the same distribution as the performance measure function itself. Therefore, the sensitivity information can be obtained with little computational (not simulation) cost, while estimating the performance measure. It is well known that the crude form of the SF estimator suffers from the problem of linear growth in its variance as the simulation run length increases. However, in steady-state simulation the variance can be controlled by the run length. Furthermore, information about the variance may be incorporated into the simulation algorithm. A recent flurry of activity has attempted to improve the accuracy of SF estimates. Under regenerative conditions, the estimator can easily be modified to alleviate this problem, yet the magnitude of the variance may be large for queueing systems with heavy traffic intensity. The heuristic idea is to treat each component of the system (e.g., each queue) separately, which tacitly assumes that individual components have local regenerative cycles. This approach is promising, since the estimator remains unbiased and efficient even when the global regenerative cycle is very long.

Now we look at the general (non-regenerative) case. In this case any simulation will give a biased estimator of the gradient, as simulations are necessarily finite. If n (the length of the simulation) is large enough, this bias is negligible. However, as noted earlier, the variance of the SF sensitivity estimator increases with n; so a crude SF estimator is not even approximately consistent. There are a number of ways to attack this problem. Most of the variation in such an estimator comes from the score function, and the variation is especially high when all past inputs contribute to the performance and the scores from all of them are included. When one uses batch means, the variation is reduced by keeping the length of the batch small.
A second way is to reduce the variance of the score to such an extent that we can use simulations long enough to effectively eliminate the bias. This is the most promising approach. The variance may be reduced further by using the standard variance reduction techniques (VRT), such as importance sampling. Finally, we can simply use a large number of i.i.d. replications of the simulation.

Harmonic analysis

Another strategy for estimating the gradient is based on the frequency domain method, which differs from time domain experiments in that the input parameters are deterministically varied in sinusoidal patterns during the simulation run, as opposed to being kept fixed as in time domain runs. The range of possible values for each input factor should be identified. Then the value of each input factor within its defined range should be changed during a run. In time series analysis, t is the time index. In simulation, however, t is not necessarily the simulation clock time. Rather, t is a variable of the model which keeps track of certain statistics during each run. For example, to generate the inter-arrival times in a queueing simulation, t might be the variable that counts customer arrivals. Frequency domain simulation experiments identify the significant terms of the polynomial that approximates the relationship between the simulation output and the inputs. Clearly, the number of simulation runs required to identify the important terms by this approach is much smaller than that of the competing alternatives, and the difference becomes even more conspicuous as the number of parameters increases.

Conclusions Further Readings

PA and SF (or LR) can be unified. Further comparison of the PA and SF approaches reveals several interesting differences. Both approaches require an interchange of expectation and differentiation. However, the conditions for this interchange in PA depend heavily on the nature of the problem and must be verified for each application, which is not the case in SF. Therefore, in general, it is easier to satisfy SF unbiasedness conditions. PA assumes that the order of events in the perturbed path is the same as the order in the nominal path for a small enough change in v, allowing the computation of the sensitivity of the sample performance for a particular simulation. For example, if the performance measure is the mean number of customers in a busy period, the PA estimate of the gradient with respect to any parameter is zero: the number of customers per busy period will not change if the order of events does not change. In terms of ease of implementation, PA estimators may require considerable analytical work on the part of the algorithm developer, with some customization for each application, whereas SF has the advantage of remaining a generally definable algorithm whenever it can be applied. Perhaps the most important criterion for comparison lies in the question of the accuracy of an estimator, typically measured through its variance. If an estimator is strongly consistent, its variance is gradually reduced over time and ultimately approaches zero. The speed with which this happens may be extremely important: since in practice decisions normally have to be made in limited time, an estimator whose variance decreases fast is highly desirable. In general, when PA does provide unbiased estimators, the variance of these estimators is small.
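Since much of this comparison turns on the behavior of the SF estimator, a minimal sketch of it may help before continuing (Python; the exponential input density and quadratic performance function are illustrative assumptions, chosen so the exact answer is known):

    import random

    def sf_gradient_estimate(v, n=100000, rng=random):
        # Score function (likelihood ratio) estimate of dJ/dv for
        # J(v) = E[Z(Y)], Y ~ exponential with rate v, in ONE run:
        #   dJ/dv = E[ Z(Y) * d(log f(Y; v))/dv ],
        # and for f(y; v) = v * exp(-v * y) the score is 1/v - y.
        total_J, total_grad = 0.0, 0.0
        for _ in range(n):
            y = rng.expovariate(v)
            z = y * y                     # example performance Z(y)
            score = 1.0 / v - y           # d(log f)/dv at this y
            total_J += z
            total_grad += z * score
        return total_J / n, total_grad / n  # estimates of J and dJ/dv

    # For Z(y) = y^2, J(v) = 2/v^2 and dJ/dv = -4/v^3 exactly, so at
    # v = 1 the printed estimates should be near (2, -4).
    print(sf_gradient_estimate(1.0))

Note that the same n realizations y_i yield both the performance estimate and the gradient estimate, which is the single-run property claimed for the SF method above.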
PA fully exploits the structure of DES and their state dynamics by extracting the needed information from the observed sample path, whereas SF requires no knowledge of the system other than the inputs and the outputs. Therefore, when using SF methods, variance reduction is necessary. The question is whether or not the variance can be reduced enough to make the SF estimator useful in all situations to which it can be applied. The answer is certainly yes. Using the standard variance reduction techniques can help, but the most dramatic variance reduction occurs using new methods of VR such as conditioning, which has been shown numerically to have a mean squared error that is essentially the same as that of PA.

References Further Readings:
Arsham H., Algorithms for Sensitivity Information in Discrete-Event Systems Simulation, Simulation Practice and Theory, 6(1), 1-22, 1998.
Fu M. and J-Q. Hu, Conditional Monte Carlo: Gradient Estimation and Optimization Applications, Kluwer Academic Publishers, 1997.
Rubinstein R. and A. Shapiro, Discrete Event Systems: Sensitivity Analysis and Stochastic Optimization by the Score Function Method, John Wiley & Sons, 1993.
Whitt W., Minimizing delays in the GI/G/1 queue, Operations Research, 32(1), 41-51, 1984.

Simulation-based Optimization Techniques

Discrete event simulation is the primary analysis tool for designing complex systems. Simulation, however, must be linked with optimization techniques to be effectively used for systems design. We present several optimization techniques involving both continuous and discrete controllable input parameters subject to a variety of constraints. The aim is to determine the techniques most promising for a given simulation model. Many man-made systems can be modeled as Discrete Event Systems (DES); examples are computer systems, communication networks, flexible manufacturing systems, production assembly lines, and traffic transportation systems. DES evolve with the occurrence of discrete events, such as the arrival of a job or the completion of a task, in contrast with continuously variable dynamic processes, such as aerospace vehicles, which are primarily governed by differential equations. Owing to the complex dynamics resulting from stochastic interactions of such discrete events over time, the performance analysis and optimization of DES can be difficult tasks. At the same time, since such systems are becoming more widespread as a result of modern technological advances, it is important to have tools for analyzing and optimizing their parameters. Analyzing complex DES often requires computer simulation. In these systems, the objective function may not be expressible as an explicit function of the input parameters; rather, it involves some performance measures of the system whose values can be found only by running the simulation model or by observing the actual system. On the other hand, due to the increasingly large size and inherent complexity of most man-made systems, purely analytical means are often insufficient for optimization. In these cases, one must resort to simulation, with its chief advantage being its generality, and its primary disadvantage being its cost in terms of time and money. Even though, in principle, some systems are analytically tractable, the analytical effort required to evaluate the solution may be so formidable that computer simulation becomes attractive.
While the price of computing resources continues to decrease dramatically, one nevertheless can still obtain only a statistical estimate, as opposed to an exact solution. For practical purposes, this is quite sufficient. These man-made DES are costly, and therefore it is important to operate them as efficiently as possible. The high cost makes it necessary to find more efficient means of conducting simulation and optimizing its output. We consider optimizing an objective function with respect to a set of continuous and/or discrete controllable parameters, subject to some constraints.

Simulation-based optimization forms a feedback loop: the estimated performance of each trial input feeds back into the choice of the next trial input. Although feedback is a systemic concept rather than a simulation concept, whatever paradigm we use, one can always incorporate feedback. For example, consider a discrete event system (DES) model that employs resources to achieve certain tasks/processes; by incorporating decision rules regarding how to manage the stocks, and hence how the resources will be deployed depending on the stock level, the system structure clearly contains feedback loops. Usually, when modelers choose a DES approach, they often model the system as an open-loop or nearly open-loop system, making the system behave as if there were no superior agent controlling the whole production/service process. Closing the loops should be an elemental task that the simulation modeler takes care of; even if the scope does not involve doing so, there must be awareness of system behavior, particularly if the system is known to be under human decision-making processes and activities.

In almost all simulation models, an expected value can express the system's performance. Consider a system with continuous parameter v ∈ V, where V is the feasible region. Let J(v) = E[Z(Y)] be the steady-state expected performance measure, where Y is a random vector with known probability density function (pdf) f(y; v) that depends on v, and Z is the performance measure. In discrete event systems, Monte Carlo simulation is usually needed to estimate J(v) for a given value v = v0. By the law of large numbers, the sample mean of the Z(y_i) converges to the true value, where y_i, i = 1, 2, ..., n, are independent, identically distributed random vector realizations of Y from f(y; v0), and n is the number of independent replications. The aim is to optimize J(v) with respect to v.

We shall group the optimization techniques for simulation into seven broad categories, namely: deterministic search, pattern search, probabilistic search, evolutionary techniques, stochastic approximation, gradient surface, and some mixtures of these techniques.

Deterministic search techniques

A common characteristic of deterministic search techniques is that they are basically borrowed from deterministic optimization techniques. The deterministic objective function value required in the technique is now replaced with an estimate obtained from simulation. By having a reasonably accurate estimate, one hopes that the technique will perform well. Deterministic search techniques include heuristic search, complete enumeration, and random search techniques.

Heuristic search technique

The heuristic search technique is probably the one most commonly used in optimizing response surfaces. It is also the least sophisticated scheme mathematically, and it can be thought of as an intuitive and experimental approach.
The analyst determines the starting point and stopping rule based on previous experience with the system. After setting the input parameters (factors) to levels that appear reasonable, the analyst makes a simulation run with the factors set at those levels and computes the value of the response function. If it appears to be a maximum (minimum) to the analyst, the experiment is stopped. Otherwise, the analyst changes the parameter settings and makes another run. This process continues until the analyst believes that the output has been optimized. Suffice it to say that, if the analyst is not intimately familiar with the process being simulated, this procedure can turn into a blind search and can expend an inordinate amount of time and computer resources without producing results commensurate with the input. The heuristic search can be ineffective and inefficient in the hands of a novice.

Complete enumeration and random techniques

The complete enumeration technique is not applicable to continuous cases, but in a discrete space v it does yield the optimal value of the response variable. All factors (v) must assume a finite number of values for this technique to be applicable. Then, a complete factorial experiment is run. The analyst can attribute some degree of confidence to the determined optimal point when using this procedure. Although the complete enumeration technique yields the optimal point, it has a serious drawback: if the number of factors or levels per factor is large, the number of simulation runs required to find the optimal point can be exceedingly large. For example, suppose that an experiment is conducted with three factors having three, four, and five levels, respectively. Also suppose that five replications are desired to provide the proper degree of confidence. Then 300 runs of the simulator are required to find the optimal point. Hence, this technique should be used only when the number of unique treatment combinations is relatively small or a run takes little time.

The random search technique resembles the complete enumeration technique, except that one selects a set of inputs at random. The simulated result based on the set that yields the maximum (minimum) value of the response function is taken to be the optimal point. This procedure reduces the number of simulation runs required to yield an optimal result; however, there is no guarantee that the point found is actually the optimal point. Of course, the more points selected, the more likely the analyst is to achieve the true optimum. Note that each factor need not assume only a finite number of values in this scheme. Replications can be made on the treatment combinations selected to increase the confidence in the optimal point. Which strategy is better, replicating a few points or looking at a single observation on more points, depends on the problem.

Response surface search

Response surface search attempts to fit a polynomial to J(v). If the design space v is suitably small, the performance function J(v) may be approximated by a response surface, typically of first order, or perhaps quadratic, in v, possibly after transformation, e.g., log(v). The response surface method (RSM) requires running the simulation in a first-order experimental design to determine the path of steepest descent. Simulation runs made along this path continue until one notes no improvement in J(v).
The analyst then runs a new first-order experimental design around the new optimal point reached, and finds a new path of steepest descent. The process continues until there is a lack of fit in the fitted first-order surface. Then, one runs a second-order design, and takes the optimum of the fitted second-order surface as the estimated optimum.

Although it is desirable for search procedures to be efficient over a wide range of response surfaces, no current procedure can effectively overcome non-unimodality (surfaces having more than one local maximum or minimum). An obvious way to find the global optimum would be to evaluate all the local optima. One technique that is used when non-unimodality is known to exist is called the Las Vegas technique. This search procedure estimates the distribution of the local optima by plotting the estimated J(v) for each local search against its corresponding search number. Those local searches that produce a response greater than any previous response are then identified, and a curve is fitted to the data. This curve is then used to project the estimated incremental response that will be achieved by one more search. The search continues until the value of the estimated improvement in the search is less than the cost of completing one additional search. It should be noted that a well-designed experiment requires a sufficient number of replications so that the average response can be treated as a deterministic number for search comparisons. Otherwise, since replications are expensive, it becomes necessary to use the number of simulation runs effectively. Although each simulation is at a different setting of the controllable variables, one can use smoothing techniques, such as exponential smoothing, to reduce the required number of replications.

Pattern search techniques

Pattern search techniques assume that any successful set of moves used in searching for an approximated optimum is worth repeating. These techniques start with small steps; then, if these are successful, the step size increases. Alternatively, when a sequence of steps fails to improve the objective function, this indicates that shorter steps are appropriate, so that we do not overlook any promising direction. These techniques start by initially selecting a set of incremental values for each factor. Starting at an initial base point, they check whether any incremental changes in the first variable yield an improvement. The resulting improved setting becomes the new intermediate base point. One repeats the process for each of the inputs until one obtains a new setting, where the intermediate base points act as the initial base point for the first variable. The technique then moves to the new setting. This procedure is repeated until further changes cannot be made with the given incremental values. Then, the incremental values are decreased, and the procedure is repeated from the beginning. When the incremental values reach a pre-specified tolerance, the procedure terminates; the most recent factor settings are reported as the solution.

Conjugate direction search

The conjugate direction search requires no derivative estimation, yet it finds the optimum of an N-dimensional quadratic surface after at most N iterations, where the number of iterations is equal to the dimension of the quadratic surface. The procedure redefines the n dimensions so that a single-variable search can be used successively. Single-variable procedures can be used whenever dimensions can be treated independently.
The optimization along each dimension leads to the optimization of the entire surface. Two directions are defined to be conjugate whenever the cross-product terms are all zero. The conjugate direction technique tries to find a set of n dimensions that describes the surface such that each direction is conjugate to all others. Using the above result, the technique attempts to find two search optima and replace the nth dimension of the quadratic surface with the direction specified by the two optimal points. Successively replacing the original dimensions yields a new set of n dimensions in which, if the original surface is quadratic, all directions are conjugate to each other and appropriate for n single-variable searches. While this search procedure appears to be very simple, we should point out that the selection of appropriate step sizes is most critical. The step size selection is more critical for this search technique because, during axis rotation, the step size does not remain invariant in all dimensions. As the rotation takes place, the best step size changes and becomes difficult to estimate.

Steepest ascent (descent)

The steepest ascent (descent) technique uses a fundamental result from calculus (that the gradient points in the direction of the maximum increase of a function) to determine how the initial settings of the parameters should be changed to yield an optimal value of the response variable. The direction of movement is made proportional to the estimated sensitivity of the performance to each variable. Although quadratic functions are sometimes used, one assumes that performance is linearly related to the change in the controllable variables for small changes; assume that a linear form is a good approximation. The basis of linear steepest ascent is that each controllable variable is changed in proportion to the magnitude of its slope. When each controllable variable is changed by a small amount, it is analogous to determining the gradient at a point. For a surface containing N controllable variables, this requires N points around the point of interest. When the problem is not an N-dimensional elliptical surface, the parallel-tangent points are extracted from bitangents and inflection points of occluding contours. Parallel-tangent points are points on the occluding contour where the tangent is parallel to a given bitangent or to the tangent at an inflection point.

Tabu search technique

An effective technique for overcoming local optimality in discrete optimization is the Tabu Search technique. It explores the search space by moving from a solution to its best neighbor, even if this results in a deterioration of the performance measure value. This approach increases the likelihood of moving out of local optima. To avoid cycling, solutions that were recently examined are declared tabu (taboo) for a certain number of iterations. Applying intensification procedures can accentuate the search in a promising region of the solution space. In contrast, diversification can be used to broaden the search to a less explored region. Much remains to be discovered about the range of problems for which the tabu search is best suited.

Hooke and Jeeves type techniques

The Hooke and Jeeves pattern search uses two kinds of moves, namely an exploratory move and a pattern move. The exploratory move is accomplished by doing a coordinate search in one pass through all the variables. This gives a new base point from which a pattern move is made.
A pattern move is a jump in the pattern direction, determined by subtracting the current base point from the previous base point. After the pattern move, another exploratory move is carried out at the point reached. If the estimate of J(v) is improved at the final point after the second exploratory move, it becomes the new base point. If it fails to show improvement, an exploratory move is carried out at the last base point with a smaller step in the coordinate search. The process stops when the step gets small enough.

Simplex-based techniques

The simplex-based technique performs simulation runs first at the vertices of the initial simplex, i.e., a polyhedron in the v-space having N+1 vertices. Subsequent simplexes (moving towards the optimum) are formed by three operations performed on the current simplex: reflection, contraction, and expansion. At each stage of the search process, the point with the highest J(v) is replaced with a new point found via reflection through the centroid of the simplex. Depending on the value of J(v) at this new point, the simplex is either expanded, contracted, or unchanged. The simplex technique starts with a set of N+1 factor settings. These N+1 points are all the same distance from the current point. Moreover, the distance between any two of these N+1 points is the same. Then, by comparing their response values, the technique eliminates the factor setting with the worst functional value and replaces it with a new factor setting, determined by the centroid of the N remaining factor settings and the eliminated factor setting. The resulting simplex either grows or shrinks, depending on the response value at the new factor setting. One repeats the procedure until no more improvement can be made by eliminating a point, and the resulting final simplex is small. While this technique will generally perform well for unconstrained problems, it may collapse to a point on a boundary of the feasible region, thereby causing the search to come to a premature halt. This technique is effective if the response surface is generally bowl-shaped, even with some local optimal points.

Probabilistic search techniques

All probabilistic search techniques select trial points governed by a scan distribution, which is the main source of randomness. These search techniques include random search, pure adaptive techniques, simulated annealing, and genetic methods.

Random search

A simple but very popular approach is the random search, which centers a symmetric probability density function (pdf), e.g., the normal distribution, about the current best location. The standard normal N(0, 1) is a popular choice, although the uniform distribution U(-1, 1) is also common. A variation of the random search technique determines the maximum of the objective function by analyzing the distribution of J(v) in a bounded sub-region. In this variation, the random data are fitted to an asymptotic extreme-value distribution, and the maximal J is estimated with a confidence statement. Unfortunately, these techniques cannot determine the location at which the optimum occurs, which can be as important as the optimal J value itself. Some techniques calculate the mean value and the standard deviation of J(v) from the random data as they are collected. Assuming that J is distributed normally in the feasible region, the first trial that yields a J-value two standard deviations away from the mean value is taken as a near-optimum solution.

Pure adaptive search

Various pure adaptive search techniques have been suggested for optimization in simulation.
Essentially, these techniques move from the current solution to the next solution, which is sampled uniformly from the set of all better feasible solutions.

Evolutionary Techniques

Nature is a robust optimizer. By analyzing nature's optimization mechanism, we may find acceptable solution techniques for intractable problems. Two concepts that hold the most promise are simulated annealing and genetic techniques.

Simulated annealing

Simulated annealing (SA) borrows its basic ideas from statistical mechanics. As a metal cools, its electrons align themselves in an optimal pattern for the transfer of energy. In general, a slowly cooling system, left to itself, eventually finds the arrangement of atoms which has the lowest energy. This is the behavior which motivates the method of optimization by SA. In SA we construct a model of a system and slowly decrease the temperature of this theoretical system until the system assumes a minimal-energy structure. The problem is how to map our particular problem onto such an optimizing scheme. SA as an optimization technique was first introduced to solve problems in discrete optimization, mainly combinatorial optimization. Subsequently, this technique has been successfully applied to solve optimization problems over the space of continuous decision variables. SA is a simulation optimization technique that allows random ascent moves in order to escape local minima, but a price is paid in terms of a large increase in the computational time required. It can be proven that the technique will find an approximate optimum; however, the annealing schedule might require a long time to reach a true optimum.

Genetic techniques

Genetic techniques (GT) are optimizers that use the ideas of evolution to optimize a system that is too difficult for traditional optimization techniques. Organisms are known to optimize themselves to adapt to their environment. GT differ from traditional optimization procedures in that GT work with a coding of the decision parameter set, not the parameters themselves; GT search a population of points, not a single point; GT use objective function information, not derivatives or other auxiliary knowledge; and, finally, GT use probabilistic transition rules, not deterministic rules. GT are probabilistic search optimizing techniques that do not require mathematical knowledge of the response surface of the system which they are optimizing. They borrow the paradigms of genetic evolution, specifically selection, crossover, and mutation.

Selection: The current points in the space are ranked in terms of their fitness by their respective response values. A probability is assigned to each point that is proportional to its fitness, and parents (a mating pair) are randomly selected.

Crossover: The new point, or offspring, is chosen based on some combination of the genetics of the two parents.

Mutation: The location of the offspring is also susceptible to mutation, a process which occurs with probability p, by which an offspring is replaced randomly by a new offspring location.

A generalized GT generates p new offspring at once and kills off all of the parents. This modification is important in the simulation environment. GT are well suited for qualitative or policy decision optimization, such as selecting the best queueing discipline or network topology. They can be used to help determine the design of the system and its operation. GT have been applied, for example, to inventory systems, job-shop scheduling, and computer time-sharing problems.
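A minimal sketch of the selection-crossover-mutation cycle just described (Python; the population size, mutation rate, and test function are illustrative assumptions, not prescriptions):

    import random

    def genetic_search(fitness, dim, pop_size=30, generations=100,
                       p_mut=0.1, rng=random):
        # Maximize `fitness` over real vectors using selection,
        # crossover, and mutation as described in the text.
        pop = [[rng.uniform(-5, 5) for _ in range(dim)]
               for _ in range(pop_size)]
        for _ in range(generations):
            scored = sorted(pop, key=fitness, reverse=True)
            parents = scored[:pop_size // 2]          # selection
            children = []
            while len(children) < pop_size:
                a, b = rng.sample(parents, 2)         # mating pair
                cut = rng.randrange(1, dim) if dim > 1 else 1
                child = a[:cut] + b[cut:]             # crossover
                if rng.random() < p_mut:              # mutation
                    i = rng.randrange(dim)
                    child[i] += rng.gauss(0.0, 0.5)
                children.append(child)
            pop = children                            # kill off parents
        return max(pop, key=fitness)

    # Example: maximize a simple test function of two variables.
    f = lambda v: -(v[0] ** 2 + v[1] ** 2)
    print(genetic_search(f, dim=2))   # near (0, 0)

In a simulation context, fitness would be an estimated performance measure such as the sample mean of Z(y_i) at the candidate v, rather than a closed-form function.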
GT do not have some of the shortcomings of other optimization techniques, and they will usually result in better calculated optima than those found with the traditional techniques. They can search a response surface with many local optima and find (with a high probability) the approximate global optimum. One may use GT to find an area of potential interest, and then resort to other techniques to find the optimum. Recently, several classical GT principles have been challenged.

Differential Evolution: Differential Evolution (DE) is a genetic type of algorithm for solving continuous stochastic function optimization. The basic idea is to use vector differences for perturbing the vector population: DE adds the weighted difference between two population vectors to a third vector. This way, no separate probability distribution has to be used, which makes the scheme completely self-organizing.

A short comparison

When performing search techniques in general, and simulated annealing or genetic techniques specifically, the question arises of how to generate the initial solution. Should it be based on a heuristic rule or generated randomly? Theoretically, it should not matter, but in practice this may depend on the problem. In some cases, a purely random initial solution systematically produces better final results. On the other hand, a good initial solution may lead to lower overall run times. This can be important, for example, in cases where each iteration takes a relatively long time; therefore, one has to use some clever termination rule. Simulation time is a crucial bottleneck in an optimization process. In many cases, a simulation is run several times with different initial solutions. Such a technique is the most robust, but it requires the maximum number of replications compared with all other techniques. The pattern search technique applied to small problems with no constraints or qualitative input parameters requires fewer replications than the GT. GT, however, can easily handle constraints and have lower computational complexity. Finally, simulated annealing can be embedded within the Tabu search to construct a probabilistic technique for global optimization.

References Further Readings:
Choi D.-H., Cooperative mutation based evolutionary programming for continuous function optimization, Operations Research Letters, 30, 195-201, 2002.
Reeves C. and J. Rowe, Genetic Algorithms: Principles and Perspectives, Kluwer, 2002.
Saviotti P. (Ed.), Applied Evolutionary Economics: New Empirical Methods and Simulation Techniques, Edward Elgar Pub., 2002.
Wilson W., Simulating Ecological and Evolutionary Systems in C, Cambridge University Press, 2000.

Stochastic approximation techniques

Two related stochastic approximation techniques have been proposed, one by Robbins and Monro and one by Kiefer and Wolfowitz. The first technique was not useful for optimization until an unbiased estimator for the gradient was found. Kiefer and Wolfowitz developed a procedure for optimization using finite differences. Both techniques are useful in the optimization of noisy functions, but they did not receive much attention in the simulation field until recently. Generalizations and refinements of stochastic approximation procedures give rise to weighted-average and stochastic quasi-gradient methods. These deal with constraints, non-differentiable functions, and some classes of non-convex functions, among other things.
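The flavor of these procedures can be conveyed by a short sketch of the Robbins-Monro recursion (Python; the noisy-gradient oracle and the step-size rule a_k = a/(k+1) are common textbook choices, used here only for illustration):

    import random

    def robbins_monro(noisy_grad, v0, iters=500, a=0.5):
        # Find a root of the (unobservable) gradient from noisy
        # observations, with diminishing step sizes a_k = a / (k + 1).
        v = v0
        for k in range(iters):
            v = v - (a / (k + 1)) * noisy_grad(v)
        return v

    # Example: J(v) = (v - 2)^2, so dJ/dv = 2(v - 2), observed with noise.
    g = lambda v: 2.0 * (v - 2.0) + random.gauss(0.0, 1.0)
    print(robbins_monro(g, v0=0.0))   # converges toward v = 2

The Kiefer-Wolfowitz variant, discussed next, replaces the noisy gradient observation with finite differences of noisy function values, as in the earlier finite-difference sketch.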
Kiefer-Wolfowitz type techniques

Kiefer and Wolfowitz proposed a finite difference approximation to the derivative. One version of the Kiefer-Wolfowitz technique uses two-sided finite differences. The first fact to notice about the K-W estimate is that it requires 2N simulation runs, where N is the dimension of the vector parameter v: if the decision maker is interested in gradient estimation with respect to each of the components of v, then two simulations must be run for each component. This is inefficient. The second fact is that it may have a very poor variance, and it may result in numerical calculation difficulties.

Robbins-Monro type techniques

The original Robbins-Monro (R-M) technique is not an optimization scheme, but rather a root-finding procedure for functions whose exact values are not known but are observed with noise. Its application to optimization is immediate: use the procedure to find the root of the gradient of the objective function. Interest in the R-M technique as a means of optimization was renewed with the development of perturbation analysis, score function (also known as the likelihood ratio method), and frequency domain estimates of derivatives. Optimization for simulated systems based on the R-M technique is known as a single-run technique. These procedures optimize a simulation model in a single-run simulation with a run length comparable to that required for a single iteration step in the other methods. This is achieved essentially by observing the sample values of the objective function and, based on these observations, updating the values of the controllable parameters while the simulation is running, that is, without restarting the simulation. This observing-updating sequence is done repeatedly, leading to an estimate of the optimum at the end of a single-run simulation. Besides having the potential for large computational savings, this technique can be a powerful tool in real-time optimization and control, where observations are taken as the system is evolving in time.

Gradient surface method

One may combine the gradient-based techniques with the response surface methods (RSM) for optimization purposes. One constructs a response surface with the aid of n response points and the components of their gradients. The gradient surface method (GSM) combines the virtue of RSM with that of the single-run gradient estimation techniques, such as perturbation analysis and score function techniques. A single simulation experiment with little extra work yields N+1 pieces of information, i.e., one response point and N components of the gradient. This is in contrast to crude simulation, where only one piece of information, the response value, is obtained per experiment. Thus, by taking advantage of the computational efficiency of single-run gradient estimators, N-fold fewer experiments will, in general, be needed to fit a global surface compared to the RSM. At each step, instead of using Robbins-Monro techniques to locate the next point locally, we determine a candidate for the next point globally, based on the current global fit to the performance surface. The GSM approach has the following advantages: the technique can quickly get to the vicinity of the optimal solution because its orientation is global, and it thus produces satisfactory solutions quickly; like RSM, it uses all accumulated information; and, in addition, it uses gradient surface fitting, rather than direct performance response-surface fitting, via single-run gradient estimators.
This significantly reduces the computational effort compared with RSM. Similar to RSM, GSM is less sensitive to estimation error and local optimality. Finally, it is an on-line technique: it may be implemented while the system is running.

A typical optimization scheme involves two phases: a search phase and an iteration phase. Most results in analytic computational complexity assume that good initial approximations are available and deal with the iteration phase only. If enough time is spent in the initial search phase, we can reduce the time needed in the iteration phase. The literature contains papers giving conditions for the convergence of a process; however, a process has to be more than convergent in order to be computationally interesting. It is essential that we be able to limit the cost of computation. In this sense, GSM can be thought of as helping the search phase and as an aid to limiting the cost of computation. One can adopt standard or simple devices for issues such as stopping rules. For on-line optimization, one may use a new design in GSM called the single-direction design. Since, for on-line optimization, it may not be advisable or feasible to disturb the system, a random design usually is not suitable.

Post-solution analysis

Stochastic models typically depend upon various uncertain and uncontrollable input parameters that must be estimated from existing data sets. We focus on the statistical question of how input-parameter uncertainty propagates through the model into output-parameter uncertainty. The sequential stages are descriptive, prescriptive, and post-prescriptive analysis.

Rare Event Simulation

Large deviations theory can be used to estimate the probability of rare events, such as buffer overflow, in queueing networks. It is simple enough to be applied to very general traffic models, and sophisticated enough to give insight into complex behavior. Simulation has numerous advantages over other approaches to performance and dependability evaluation, most notably its modelling power and flexibility. For some models, however, a potential problem is the excessive simulation effort (time) required to achieve the desired accuracy. In particular, simulation of models involving rare events, such as those used for the evaluation of communications and highly dependable systems, is often not feasible using standard techniques. In recent years, there have been significant theoretical and practical advances towards the development of efficient simulation techniques for the evaluation of these systems. Methodologies include techniques based on importance sampling, the RESTART method, and hybrid analytic/simulation techniques, among newly devised approaches.

Conclusions Further Readings

With the growing incidence of computer modeling and simulation, the scope of the simulation domain must be extended to include much more than traditional optimization techniques. Optimization techniques for simulation must also account specifically for the randomness inherent in estimating the performance measure and satisfying the constraints of stochastic systems. We described the most widely used optimization techniques that can be effectively integrated with a simulation model. We also described techniques for post-solution analysis, with the aim of theoretical unification of the existing techniques. All techniques were presented in step-by-step format to facilitate implementation in a variety of operating systems and computers, thus improving portability.
General comparisons among different techniques in terms of bias, variance, and computational complexity are not possible. However, a few studies rely on real computer simulations to compare different techniques in terms of accuracy and number of iterations. The total computational effort for reducing both the bias and the variance of the estimate depends on the computational budget allocated for a simulation optimization.

No single technique works effectively and/or efficiently in all cases. The simplest technique is the random selection of some points in the search region for estimating the performance measure. In this technique, one usually fixes the number of simulation runs and takes the smallest (or largest) estimated performance measure as the optimum. This technique is useful in combination with other techniques to create a multi-start technique for global optimization. The most effective technique for overcoming local optimality in discrete optimization is the tabu search technique. In general, the probabilistic search techniques, as a class, offer several advantages over optimization techniques based on gradients. In the random search technique, the objective function can be non-smooth or even have discontinuities. The search program is simple to implement on a computer, and it often shows good convergence characteristics in noisy environments. More importantly, it can offer the global solution in a multi-modal problem, if the technique is employed in the global sense. Convergence proofs under various conditions are given in the literature. The Hooke-Jeeves search technique works well for unconstrained problems with fewer than 20 variables; pattern search techniques are more effective for constrained problems. Genetic techniques are the most robust and can produce near-best solutions for larger problems. The pattern search technique is most suitable for small problems with no constraints, and it requires fewer iterations than the genetic techniques.

The most promising techniques are stochastic approximation, simultaneous perturbation, and the gradient surface methods. Stochastic approximation techniques using perturbation analysis, score function, or simultaneous perturbation gradient estimators optimize a simulation model in a single simulation run. They do so by observing the sample values of the objective function; based on these observations, the stochastic approximation techniques update the values of the controllable parameters while the simulation is running, without restarting the simulation. This observing-updating sequence, done repeatedly, leads to an estimate of the optimum at the end of a single-run simulation. Besides having the potential of large savings in computational effort in the simulation environment, this technique can be a powerful tool in real-time optimization and control, where observations are taken as the system is evolving over time.

Response surface methods have a slow convergence rate, which makes them expensive. The gradient surface method combines the advantages of the response surface methods (RSM) with the efficiency of the gradient estimation techniques, such as infinitesimal perturbation analysis, score function, simultaneous perturbation analysis, and the frequency-domain technique. In the gradient surface method (GSM) the gradient is estimated, and the performance gradient surface is estimated from observations at various points, similar to the RSM.
Zero points of the successively approximating gradient surface are then taken as the estimates of the optimal solution. GSM is characterized by several attractive features: it is a single-run technique and more efficient than RSM; at each iteration step, it uses the information from all of the data points rather than just the local gradient; and it tries to capture the global features of the gradient surface and thereby quickly arrive in the vicinity of the optimal solution. Close to the optimum, however, such methods take many iterations to converge to stationary points, so search techniques are more suitable as a second phase. The main interest is to figure out how to allocate the total available computational budget across the successive iterations.

When the decision variable is qualitative, such as finding the best system configuration, a random or permutation test is proposed. This technique starts with the selection of an appropriate test statistic, such as the absolute difference between the mean responses under two scenarios. The test value is computed for the original data set. The data are then shuffled (using a different seed), the test statistic is computed for the shuffled data, and its value is compared to the value of the test statistic for the original, un-shuffled data. If the statistic for the shuffled data is greater than or equal to the actual statistic for the original data, then a counter c is incremented by 1. The process is repeated for any desired number of times m. The final step is to compute (c+1)/(m+1), which is the significance level of the test. The null hypothesis is rejected if this significance level is less than or equal to the specified rejection level for the test. There are several important aspects to this nonparametric test. First, it enables the user to select the statistic. Second, assumptions such as normality or equality of variances, made for the t-test, ranking-and-selection, and multiple-comparison procedures, are no longer needed. A generalization is the well-known bootstrap technique. (A code sketch of this test appears at the end of this passage.)

What Must Be Done: computational studies of techniques for systems with a large number of controllable parameters and constraints; effective combinations of several efficient techniques to achieve the best results under constraints on computational resources; development of parallel and distributed schemes; development of an expert system that incorporates all available techniques.

References and Further Readings:
Arsham H., Techniques for Monte Carlo Optimizing, Monte Carlo Methods and Applications, 4(3), 181-230, 1998.
Arsham H., Stochastic Optimization of Discrete Event Systems Simulation, Microelectronics and Reliability, 36(10), 1357-1368, 1996.
Fu M. and J-Q. Hu, Conditional Monte Carlo: Gradient Estimation and Optimization Applications, Kluwer Academic Publishers, 1997.
Rollans S. and D. McLeish, Estimating the optimum of a stochastic system using simulation, Journal of Statistical Computation and Simulation, 72, 357-377, 2002.
Rubinstein R. and A. Shapiro, Discrete Event Systems: Sensitivity Analysis and Stochastic Optimization by the Score Function Method, John Wiley & Sons, 1993.

Metamodeling and the Goal-Seeking Problems

The simulation models, although simpler than the real-world system, are still a very complex way of relating input (v) to output J(v). Sometimes a simpler analytic model may be used as an auxiliary to the simulation model. This auxiliary model is often referred to as a metamodel.
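As a toy illustration of the metamodel idea, the following sketch fits a cheap quadratic surrogate to a hypothetical noisy simulation response (here secretly 3/(4v), the reliability example used below); the response function and all names are assumptions for illustration:

    import numpy as np

    rng = np.random.default_rng(1)

    def simulate_J(v):
        # Stand-in for an expensive simulation: a noisy estimate of the
        # unknown response J(v); the hidden truth here is 3/(4v).
        return 3.0 / (4.0 * v) + rng.normal(0.0, 0.02)

    # Fit a quadratic response-surface metamodel from a few design points.
    design = np.linspace(0.4, 0.6, 5)
    responses = np.array([simulate_J(v) for v in design])
    coeffs = np.polyfit(design, responses, deg=2)

    # The cheap polynomial now stands in for the expensive simulation.
    print(np.polyval(coeffs, 0.52), 3.0 / (4.0 * 0.52))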
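And, returning to the permutation test described earlier in this passage, here is a direct sketch of the shuffle-and-count procedure (the two scenario samples are made-up numbers for illustration):

    import random

    def permutation_test(sample_a, sample_b, m=9999, seed=7):
        # Test statistic: absolute difference between the two mean responses.
        stat = lambda a, b: abs(sum(a) / len(a) - sum(b) / len(b))
        observed = stat(sample_a, sample_b)
        pooled = list(sample_a) + list(sample_b)
        rng = random.Random(seed)
        c = 0
        for _ in range(m):
            rng.shuffle(pooled)                  # re-label the pooled data
            a = pooled[: len(sample_a)]
            b = pooled[len(sample_a):]
            if stat(a, b) >= observed:           # as extreme as the original
                c += 1
        return (c + 1) / (m + 1)                 # significance level of the test

    # Reject "no difference between scenarios" if below, say, the 0.05 level:
    print(permutation_test([5.1, 4.9, 5.3, 5.0], [5.8, 6.1, 5.9, 6.0]))

The returned value is exactly the (c+1)/(m+1) significance level described above.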
In many simulation applications, such as systems analysis and design, the decision maker may not be interested in optimization but wishes to achieve a certain value for J(v), say J0. This is the goal-seeking problem: given a target output value J0 of the performance and a parameterized pdf family, one must find an input value for the parameter which generates such an output.

Metamodeling

There are several techniques available for metamodeling, including design of experiments, response surface methodology, Taguchi methods, neural networks, inductive learning, and kriging. Metamodeling may have different purposes: model simplification and interpretation, optimization, what-if analysis, and generalization to models of the same type. The following polynomial model can be used as an auxiliary model:

J(v) ≈ J(v0) + δv J'(v0) + (δv)^2 J''(v0)/2,

where δv = v - v0 and the primes denote derivatives. This metamodel approximates J(v) for small δv. To estimate J(v) in the neighborhood of v0 by a linear function, we need to estimate the nominal J(v0) and its first derivative. Traditionally, this derivative is estimated by crude Monte Carlo, i.e. finite differences, which requires rerunning the simulation model. Methods that yield enhanced efficiency and accuracy in this estimation, at little additional computational (not simulation) cost, are presented on this site. The score function method estimates the first derivative as

J'(v) = E[Z(Y) S(Y)], where S(y) = d ln f(y; v)/dv

is the score function and differentiation is with respect to v, provided that f'(y; v) exists and f(y; v) is positive for all v in V. The score function approach can be extended to estimate the second and higher-order derivatives. For example, an estimate of the second derivative based on the score function method is

J''(v) = E[Z(Y) H(Y)], where H(y) = S'(y) + S(y)^2,

and S and H are the score and information functions, respectively, widely used in the statistics literature, for example in the construction of Cramer-Rao bounds. With the gradient and Hessian at our disposal, we are able to construct a second-order local metamodel using the Taylor series.

An Illustrative Numerical Example: For most complex reliability systems, performance measures such as the mean time to failure (MTTF) are not available in analytical form. We resort to Monte Carlo simulation (MCS) to estimate the MTTF from a family of single-parameter density functions of the components' lifetimes with a specific value for the parameter. The purpose of this section is to solve the inverse problem, which deals with the calculation of the components' life parameters of a homogeneous subsystem, given a desired target MTTF for the system. A stochastic approximation algorithm is used to estimate the necessary controllable input parameter within a desired range of accuracy. The potential effectiveness is demonstrated by simulating a reliability system with a known analytical solution.

Consider the coherent reliability sub-system with four components: components 1 and 2 are in series, components 3 and 4 are also in series, and these two series branches are in parallel, as illustrated in the accompanying figure. All components work independently and are homogeneous, i.e.
manufactured by an identical process, the components having independent random lifetimes Y1, Y2, Y3, and Y4, distributed exponentially with rate v = v0 = 0.5. The system lifetime is

Z(Y1, Y2, Y3, Y4; v0) = max[min(Y1, Y2), min(Y3, Y4)].

It can readily be shown that the theoretical expected lifetime of this sub-system is E[Z] = 3/(4v), which equals 1.5 at the nominal rate v0 = 0.5. The underlying pdf for this system is

f(y; v) = v^4 exp(-v Σ yi), the sum being over i = 1, 2, 3, 4.

Applying the score function method, we have:

S(y) = f'(y; v)/f(y; v) = 4/v - Σ yi,
H(y) = f''(y; v)/f(y; v) = (Σ yi)^2 - (8/v) Σ yi + 12/v^2,

the sums being over i = 1, 2, 3, 4. The estimated average lifetime and its derivative for the nominal system with v = v0 = 0.5 are

J(0.5) ≈ (1/n) Σj Z(Yj) and J'(0.5) ≈ (1/n) Σj Z(Yj) S(Yj),

respectively, where Yi,j is the j-th observation of the i-th component (i = 1, 2, 3, 4). We performed a Monte Carlo experiment for this system by generating n = 10000 independent replications, using SIMSCRIPT II.5 random number streams 1 through 4 to generate the exponential random variables Y1, Y2, Y3, and Y4, respectively, on a VAX system. The estimated performance is J(0.5) = 1.5024, with a standard error of 0.0348. The first and second derivative estimates are -3.0933 and 12.1177, with standard errors of 0.1126 and 1.3321, respectively. The response surface approximation in the neighborhood of v = 0.5 is:

J(v) ≈ 1.5024 + (v - 0.5)(-3.0933) + (v - 0.5)^2 (12.1177)/2 = 6.0589 v^2 - 9.1522 v + 4.5638.

A numerical comparison of the exact response and this metamodel approximation reveals that the largest absolute error is only about 0.33 percent for any v in the range [0.40, 0.60]. This error could be reduced by more accurate estimates of the derivatives and/or a higher-order Taylor expansion. A comparison of the errors indicates that they are smaller and more stable in the direction of increasing v. This behavior is partly due to the fact that the lifetimes are exponentially distributed with variance 1/v^2; therefore, increasing v produces less variance than in the nominal system (with v = 0.50).

Goal-Seeking Problem

As noted above, in many systems modeling and simulation applications the decision maker may not be interested in optimization but wishes to achieve a certain value for J(v), say J0. When v is a controllable input, the decision maker may thus be interested in the goal-seeking problem: what change of the input parameter will achieve a desired change in the output value. Another application of the goal-seeking problem arises when we want to adapt a model to satisfy a new equality constraint involving some stochastic functions. We may apply the search techniques, but the goal-seeking problem can also be treated as an interpolation based on a metamodel. In this approach, one generates a response surface function for J(v) and then uses the fitted function to interpolate for the unknown parameter. This approach is tedious, time-consuming, and costly; moreover, in a random environment, the fitted model might have unstable coefficients. For a given target J0, the estimated change δv from the first-order approximation is

δv ≈ (J0 - J(v0)) / J'(v0),

provided that the denominator does not vanish for v0 in the set V.

The Goal-Seeker Module: The goal-seeking problem can be solved as a simulation problem. By this approach, we are able to apply the variance reduction techniques (VRT) used in the simulation literature.
Specifically, the solution to the goal-seeking problem is the unique solution of the stochastic equation J(v) - J0 = 0. The problem is to solve this stochastic equation by a suitable experimental design that ensures convergence. The following is a Robbins-Monro (R-M) type technique:

v(j+1) = v(j) + dj [J0 - J(v(j))],

where dj is any divergent sequence of positive numbers (one for which Σ dj diverges while dj tends to zero). Under these conditions, the correction dj [J0 - J(v(j))] approaches zero while dampening the effect of the simulation random errors. These conditions are satisfied, for example, by the harmonic sequence dj = 1/j. With this choice, the gains shrink very quickly at first, so the steps may become very small as we approach the root. Therefore, a better choice is, for example, dj = 9/(9 + j). This technique places experiment j+1 according to the outcome of experiment j immediately preceding it, as depicted in the accompanying figure. Under these not unreasonable conditions, the algorithm converges in mean square; moreover, the convergence is almost sure. Finally, as with Newton's root-finding method, it is impossible to assert that the method converges for just any initial value v0, even though J(v) may satisfy the Lipschitz condition over the set V. Indeed, if the initial value v0 is sufficiently close to the solution, which is usually the case, then the algorithm requires only a few iterations to obtain a solution of very high accuracy.

An application of the goal-seeker module arises when we want to adapt a model to satisfy a new equality constraint (condition) on some stochastic function. The proposed technique can also be used to solve integral equations by embedding importance sampling techniques within the Monte Carlo sampling. One may extend the proposed methodology to inverse problems with two or more unknown parameters by considering two or more relevant outputs, to ensure uniqueness. With this generalization we can construct a linear (or even nonlinear) system of stochastic equations to be solved simultaneously by a multidimensional version of the proposed algorithm. The simulation design becomes more involved for problems with more than a few parameters.

References and Further Readings:
Arsham H., The Use of Simulation in Discrete Event Dynamic Systems Design, Journal of Systems Science, 31(5), 563-573, 2000.
Arsham H., Input Parameters to Achieve Target Performance in Stochastic Systems: A Simulation-based Approach, Inverse Problems in Engineering, 7(4), 363-384, 1999.
Arsham H., Goal Seeking Problem in Discrete Event Systems Simulation, Microelectronics and Reliability, 37(3), 391-395, 1997.
Batmaz I. and S. Tunali, Small response surface designs for metamodel estimation, European Journal of Operational Research, 145(3), 455-470, 2003.
Ibidapo-Obe O., O. Asaolu, and A. Badiru, A New Method for the Numerical Solution of Simultaneous Nonlinear Equations, Applied Mathematics and Computation, 125(1), 133-140, 2002.
Lamb J. and R. Cheng, Optimal allocation of runs in a simulation metamodel with several independent variables, Operations Research Letters, 30(3), 189-194, 2002.
Simpson T., J. Poplinski, P. Koch, and J. Allen, Metamodels for Computer-based Engineering Design: Survey and Recommendations, Engineering with Computers, 17(2), 129-150, 2001.
Tsai C-Sh., Evaluation and optimisation of integrated manufacturing system operations using Taguchi's experiment design in computer simulation, Computers and Industrial Engineering, 43(3), 591-604, 2002.
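The following minimal sketch (an illustration under the stated exponential model, not the author's code) applies an R-M type recursion of this kind to the four-component reliability example above, seeking the rate v whose MTTF equals a target J0. Note that the sign of the correction is flipped relative to the generic recursion, because the MTTF 3/(4v) is decreasing in v:

    import random

    def estimate_J(v, n=2000):
        # Monte Carlo estimate of the sub-system MTTF at rate v:
        # Z = max(min(Y1, Y2), min(Y3, Y4)), all Yi ~ Exp(v).
        total = 0.0
        for _ in range(n):
            y = [random.expovariate(v) for _ in range(4)]
            total += max(min(y[0], y[1]), min(y[2], y[3]))
        return total / n

    def goal_seek(J0, v=0.5, iterations=200):
        # Robbins-Monro recursion with gains d_j = 9/(9 + j) as suggested
        # above; the correction sign is reversed since J(v) decreases in v.
        for j in range(1, iterations + 1):
            d_j = 9.0 / (9.0 + j)
            v -= d_j * (J0 - estimate_J(v))
            v = max(v, 1e-3)      # keep the failure rate positive
        return v

    # Target MTTF of 1.0; the exact answer is v = 3/(4*J0) = 0.75.
    print(goal_seek(J0=1.0))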
What-if Analysis Techniques

Introduction

Simulation models are often subject to errors caused by the estimated parameter(s) of the underlying input distribution function. What-if analysis is needed to establish confidence with respect to small changes in the parameters of the input distributions. However, the direct approach to what-if analysis requires a separate simulation run for each input value. Since this is often prohibited by cost, what people basically do in practice, as an alternative, is to plot results and use simple linear interpolation/extrapolation. This section presents some simulation-based techniques that utilize the current information to estimate the performance function for several scenarios without any additional simulation runs.

Simulation continues to be the primary method by which system analysts obtain information about complex stochastic systems. In almost all simulation models, the system's performance can be expressed as an expected value. Consider a system with continuous parameter v ∈ V, where V is the feasible region. Let

J(v) = E[Z(Y)]

be the steady-state expected performance measure, where Y is a random vector with known probability density function (pdf) f(y; v) depending on v, and Z is the performance measure. In discrete event systems, Monte Carlo simulation is usually needed to estimate J(v) for a given value v. The sample-mean estimator

J(v) ≈ (1/n) Σ Z(yi),

where yi, i = 1, 2, ..., n, are independent, identically distributed random vector realizations of Y from f(y; v) and n is the number of independent replications, is unbiased and converges to J(v) by the law of large numbers.

There are strong motivations for estimating the expected performance measure J(v) for a small change of v to v + δv, that is, for solving the so-called what-if problem. The simulationist must meet managerial demands to consider model validation and cope with uncertainty in the estimation of v. Adaptation of a model to new environments also requires an adjustment of v. An obvious solution to the what-if problem is the crude Monte Carlo (CMC) method, which estimates J(v + δv) for each perturbed value separately by rerunning the system; the cost in CPU time can therefore be prohibitive. The use of simulation as a tool to design complex stochastic computer systems is in this way often inhibited by cost, since extensive simulation is needed to estimate performance measures for changes in the input parameters. In this section we consider the what-if analysis problem by extending the information obtained from a single run at the nominal value of the parameter v to a close neighborhood of it. We also present the use of results from runs at two or more points over the intervening interval. We refer to the former as extrapolation and to the latter as interpolation by simulation. The results are obtained at some computational cost, as opposed to simulation cost. The proposed techniques therefore estimate a performance measure at multiple settings from a simulation at a single nominal value.

Likelihood Ratio (LR) Method

Based on the Radon-Nikodym theorem, a model to estimate J(v + δv) for stochastic systems in a single run is

J(v + δv) = E[Z(Y) W(Y)], where the likelihood ratio W(y) = f(y; v + δv)/f(y; v)

adjusts the sample path, provided f(y; v) does not vanish. Notice that by this change of probability space, we are using the same realizations as for J(v).
The generated random vectors y are roughly representative of Y under f(y; v). Each of these random observations could also, hypothetically, have come from f(y; v + δv), and W weights the observations accordingly. Therefore, the what-if estimate is

J(v + δv) ≈ (1/n) Σ Z(yi) W(yi),

which is based on only one sample path of the system with parameter v, so that a simulation of the system at v + δv is not required. Unfortunately, LR produces a larger variance than CMC. However, since E(W) = 1, variance reduction techniques (VRT) may improve the estimate.

Exponential Tangential in Expectation Method

In the statistical literature the efficient score function is defined to be the gradient S(y) = d ln f(y; v)/dv. We consider the exponential (approximation) model for J(v + δv) in a first-derivative neighborhood of v:

J(v + δv) ≈ E[Z(Y) exp(δv S(Y))] / E[exp(δv S(Y))].

We can then estimate J(v + δv) based on n independent replications by replacing both expectations with their sample means.

Taylor Expansion of the Response Function

The following linear Taylor model can be used as an auxiliary model:

J(v + δv) ≈ J(v) + δv J'(v),

where the prime denotes the derivative. This metamodel approximates J(v + δv) for small δv. For this estimate, we need to estimate the nominal J(v) and its first derivative. Traditionally, this derivative is estimated by crude Monte Carlo, i.e. finite differences, which requires rerunning the simulation model. Methods that yield enhanced efficiency and accuracy in this estimation, at little additional cost, are of great value. There are a few ways to obtain the derivatives of the output with respect to an input parameter efficiently, as presented earlier on this site. The most straightforward is the score function (SF) method. The SF approach is the major method for estimating the performance measure and its derivative while observing only a single sample path from the underlying system. The basic idea of SF is that the derivative of the performance function, J'(v), is expressed as an expectation with respect to the same distribution as the performance measure itself. Using the estimated values of J(v) and its derivative J'(v), the perturbed estimate J(v + δv) ≈ J(v) + δv J'(v) has variance

Var[J(v + δv)] = Var[J(v)] + (δv)^2 Var[J'(v)] + 2 δv Cov[J(v), J'(v)].

This variance is needed for constructing a confidence interval for the perturbed estimate.

Interpolation Techniques

Given two points v1 and v2 (scalars only) that are sufficiently close, one may simulate at these two points and then interpolate for any desired point in between, looking for the best linear interpolation in the sense of minimum error on the interval. Similar to the likelihood ratio approach, the performance at an interior point v can be written in terms of the likelihood ratios W1 = f(y; v)/f(y; v1) and W2 = f(y; v)/f(y; v2). One obvious choice of weight is φ(y) = f(y; v1)/[f(y; v1) + f(y; v2)]. This method is easily extended to k-point interpolation. For 2-point interpolation, if we let φ be a constant in the interval [0, 1], the linearly interpolated what-if estimate is

J(v) ≈ φ J1(v) + (1 - φ) J2(v),

where the two estimates on the right-hand side are two independent likelihood ratio extrapolations using the two end-points. We define φ* as the φ in this convex combination with the minimum error in the estimate, that is, the one minimizing the variance of the combination. By the first-order necessary and sufficient conditions, the optimal weight is

φ* = Var[J2(v)] / (Var[J1(v)] + Var[J2(v)]).

Thus, the best linear interpolation for any point in the interval [v1, v2] is the convex combination with weight φ*, which is optimal in the sense of having minimum variance.
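Tying the likelihood ratio method back to the reliability example used earlier, the following sketch estimates both the nominal performance and a perturbed one from a single set of sample paths (a minimal illustration under the stated exponential model; names are hypothetical):

    import math
    import random

    def what_if_lr(v=0.5, dv=0.05, n=100_000, seed=3):
        # One simulation run at the nominal rate v; likelihood-ratio weights
        # W = f(y; v+dv)/f(y; v) = ((v+dv)/v)**4 * exp(-dv * sum(y))
        # re-use the same sample paths to estimate performance at v + dv.
        rng = random.Random(seed)
        j_nominal = 0.0
        j_perturbed = 0.0
        for _ in range(n):
            y = [rng.expovariate(v) for _ in range(4)]
            z = max(min(y[0], y[1]), min(y[2], y[3]))
            w = ((v + dv) / v) ** 4 * math.exp(-dv * sum(y))
            j_nominal += z
            j_perturbed += z * w
        return j_nominal / n, j_perturbed / n

    # Exact values are 3/(4*0.5) = 1.5 and 3/(4*0.55), about 1.3636.
    print(what_if_lr())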
Conclusions and Further Readings

Estimating system performance for several scenarios via simulation generally requires a separate simulation run for each scenario. In some very special cases, such as the exponential density f(y; v) = v exp(-vy), one could obtain the perturbed estimate using perturbation analysis directly, as follows. One can generate a random variate Y by the inverse transformation

Yi = -ln(Ui)/v,

where ln is the natural logarithm and Ui is a random number distributed uniformly over [0, 1]. For the perturbed parameter v + δv, the counterpart realization using the same Ui is

Yi = -ln(Ui)/(v + δv).

Clearly, this single-run approach is limited, since the inverse transformation is not always available in closed form. The accompanying figure illustrates the perturbation analysis method. Because the perturbation analysis approach has this serious limitation, we presented techniques for estimating performance under several scenarios using a single sample path, such as the likelihood ratio method, illustrated in a second figure.

Research Topics: Items for further research include: (i) introducing efficient variance reduction and bias reduction techniques to improve the accuracy of the existing and proposed methods; (ii) incorporating the results of this study in a random search optimization technique. In this approach one generates a number of points in the feasible region, uniformly distributed on the surface of a hyper-sphere with a specified radius centered at a starting point. At each stage, the value of the performance measure is estimated at the center (as a nominal value), and perturbation analysis is used to estimate the performance measure at the sequence of points on the hyper-sphere. The best point (depending on whether the problem is a max or a min) is used as the center of a smaller hyper-sphere. Iterating in this fashion, one can capture the optimal solution within a hyper-sphere of a specified, small enough radius. Clearly, this approach can be considered a sequential self-adaptive optimization technique; (iii) estimating the sensitivities, i.e. the gradient, Hessian, etc., of J(v), which can be approximated using finite differences. For example, the first derivative can be obtained in a single run using the likelihood ratio method, with the sums taken over all i = 1, 2, ..., n; the last two of these estimators may induce some variance reduction; (iv) other interpolation techniques are also possible. The most promising one is based on kriging. This technique gives more weight to neighboring realizations and is widely used in geo-statistics. Other items for further research include experimentation on large and complex systems, such as a large Jackson network with routing that includes feedback loops, in order to study the efficiency of the presented techniques.

References and Further Readings:
Arsham H., Performance Extrapolation in Discrete-event Systems Simulation, Journal of Systems Science, 27(9), 863-869, 1996.
Arsham H., A Simulation Technique for Estimation in Perturbed Stochastic Activity Networks, Simulation, 58(8), 258-267, 1992.
Arsham H., Perturbation Analysis in Discrete-Event Simulation, Modelling and Simulation, 11(1), 21-28, 1991.
Arsham H., What-if Analysis in Computer Simulation Models: A Comparative Survey with Some Extensions, Mathematical and Computer Modelling, 13(1), 101-106, 1990.
Arsham H., A. Feuerverger, D. McLeish, J. Kreimer and R. Rubinstein,
Sensitivity analysis and the what-if problem in simulation analysis, Mathematical and Computer Modelling, 12(1), 193-219, 1989.

What is a Filter

In a modern control system, a filter is an algorithm (or function block) used mainly for the reduction of noise on a process measurement signal (Figure 1), but that is not its only use, as we will see later.

Figure 1. Noise filter.

Types of Filters

Control systems generally provide first-order lag and/or moving-average filters. A few control systems provide higher-order filters. The different types of filters are briefly discussed below.

First-Order Lag Filter

The most common type of filter is the first-order lag filter, whose output approaches the value of the input exponentially over time (Figure 2). It is also called a low-pass filter because high frequencies (fast changes) are attenuated while low frequencies (slow changes) are passed through. This makes the first-order lag filter ideal for reducing the noise component of a process measurement signal, because noise tends to be of higher frequency than process changes.

Figure 2. Response of a 20-second first-order lag filter to a step change in its input.

The time constant of a first-order lag filter is the time it takes for its output to complete 63.2% of a sustained change on its input (Figure 2).

Moving-Average Filter

Another type of filter is the moving-average filter. This type of filter stores a number of samples in a first-in, first-out buffer. On every execution cycle, a new value from the filter's input is stored in the buffer and the oldest value is discarded. The filter then calculates the average of all the stored values, which becomes the new output of the filter, as depicted in Figure 3.

Figure 3. Moving-average filter.

The output of a moving-average filter approaches the final value linearly and then comes to an abrupt stop, as opposed to a first-order lag, which approaches the final value exponentially (Figure 4).

Figure 4. Output of a moving-average filter compared to that of a first-order lag filter.

Higher-Order Filters

Higher-order filters consist of multiple lags and leads arranged in a specific way to provide a steeper cut-off or to filter out specific frequencies (such as 60 Hz). Although these filters are much more common in the electronics industry, some control systems also provide at least a subset of them. Higher-order filter types include low-pass, band-pass, notch, and high-pass (although the latter would be very uncommon in process control applications). There probably are situations in which the use of higher-order filters would be preferable over simple first-order lag filters, but for most cases in general process control, first-order lag filters are adequate for smoothing noisy process measurement signals.
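As a minimal sketch of the two filter types described above (assuming a one-second sample time and the common discrete form alpha = dt/(Tf + dt) for the first-order lag; exact implementations vary by control system):

    from collections import deque

    def first_order_lag(samples, time_constant, dt):
        # Discrete first-order lag (an exponential moving average): each new
        # sample is blended into the previous output; alpha = dt/(Tf + dt).
        alpha = dt / (time_constant + dt)
        out, y = [], samples[0]
        for x in samples:
            y += alpha * (x - y)
            out.append(y)
        return out

    def moving_average(samples, window):
        # FIFO buffer of the last `window` samples; output is their mean.
        buf = deque(maxlen=window)
        out = []
        for x in samples:
            buf.append(x)
            out.append(sum(buf) / len(buf))
        return out

    step = [0.0] * 5 + [1.0] * 45      # unit step input, 1 s sampling
    print(first_order_lag(step, time_constant=20.0, dt=1.0)[-1])
    print(moving_average(step, window=20)[-1])

Run on a step input, the lag output creeps up exponentially while the moving average climbs linearly and then stops abruptly, matching the behavior shown in Figure 4.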
Since higher-order filters are rarely used in control loops, I will not delve deeper into their design and application here.

Should I Use a First-Order or a Moving-Average Filter

If you have a choice, do not use a moving-average filter for smoothing a noisy process measurement; a first-order lag filter is better suited to smoothing out noise. The reason is as follows: with a first-order lag filter, newly sampled values contribute more to the output than older samples; the newest value contributes the most, and the contribution of older samples decreases exponentially over time. With a moving-average filter, all values in the buffer contribute equally to the output. If there is a spike on the input of a moving-average filter, its contribution stays unchanged until it suddenly disappears when the value drops off the buffer. With a lag filter, the contribution of the spike wanes over time.

Filter Applications

There are three main applications for filters in control systems. These are discussed below.

Noise Filter

Also called a smoother, a noise filter is used to smooth out high-frequency noise from a process measurement signal, as depicted in Figure 1. These filters are commonly applied to flow-measurement signals because of the tendency of these signals to have a substantial noise component. A first-order lag filter with a time constant of two to three seconds is normally sufficient for a flow control loop. Longer time constants may be used if needed, but be careful that the filter does not become the dominant lag in the loop. Some level measurements can also have a large noise component, e.g. where boiling or liquid-gas separation affects the level. Level controllers (except on steam drums and surge tanks) often require a high controller gain, making the controller output very sensitive to noise. In these cases filters with longer time constants (e.g. 10 to 20 seconds) may be required. An appropriate filter time constant (Tf) can be calculated as follows:

Tf = (Amplitude of Noise / Desired Amplitude after Filtering) x (Period of Noise) / (2 x pi)

where pi is approximately 3.14, and the period of the noise can be determined by counting the number of peaks in the signal over one minute and then inverting this number, i.e. using 1/x. The equation above then gives the filter time constant in minutes; convert it to seconds (multiply by 60) if your control system uses seconds as the time unit for filters. (A short worked example appears at the end of the article.) Note that adding a filter to a control loop, or changing the filter's time constant, changes the dynamic behavior of the control loop. This requires retuning the controller to accommodate the loop's new dynamics. Also, use the minimum filtering possible, because a filter introduces lag, which will likely result in a slower-performing control loop, and it may hide process problems.

Anti-Aliasing Filter

In process control, anti-aliasing filters are used on analog input signals to remove high-frequency components from the signals before they are sampled by the digital control system. This is done to prevent aliasing problems, in which high-frequency components of the original signal appear as low-frequency aliases after sampling by the control system. YouTube has some nice videos demonstrating aliasing. Anti-alias filtering must be done in the transmitter, i.e. before the analog signal is sampled by the A/D converter in the control system's input module. The anti-aliasing filter should provide a minimum of -12 dB of attenuation at the Nyquist frequency, but preferably more, as explained in a 1994 paper by EnTech.
This can be provided by a first-order low-pass filter with a time constant set to at least 1.3 times the slowest sampling period. For example, if the input card samples the analog inputs at a rate of 1 sample per 500 milliseconds and the controller execution interval is 1 second, a minimum filter time constant of 1.3 seconds should be used.

Setpoint Filter

A setpoint filter passes the control loop's setpoint through a first-order lag filter before the controller receives the signal. A setpoint filter can be used to reduce or eliminate overshoot on control loops that receive operator-made setpoint changes. This mostly applies to lag-dominant processes that have been tuned for fast disturbance rejection. It can also be used to reduce the amount of abrupt control action resulting from a setpoint change. (However, my preferred solution in both cases is to apply the proportional and derivative modes only to the process variable instead of to the error, if the control algorithm supports this.)

Figure 5. Effect of a standard setpoint change versus setpoint filtering.

Control guru Greg McMillan recommends that the setpoint filter's time constant be set equal to the integral time of the controller, or to 1.5 times the integral time if the controller is tuned more aggressively for minimum settling time. Setpoint filters should never be used in control loops that are required to closely track their setpoints (such as in cascade, feedforward, and ratio control) because a setpoint filter slows down the controller's response to setpoint changes.

Final Words

Filters are handy devices in control systems and have multiple uses, the main one being to reduce the noise component on measurement signals. Only use filters when needed, and then with as little filtering as possible. And remember that a filter alters the dynamic response of the loop (except for setpoint filters), so the controller has to be retuned after a filter's time constant has been changed.
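As a closing worked example of the noise-filter sizing formula given earlier (the numbers are made up for illustration):

    import math

    def filter_time_constant(noise_amplitude, desired_amplitude, peaks_per_minute):
        # Tf = (noise amplitude / desired amplitude) * period / (2*pi),
        # with the noise period in minutes taken as 1 / (peaks per minute).
        period_min = 1.0 / peaks_per_minute
        tf_minutes = (noise_amplitude / desired_amplitude) * period_min / (2.0 * math.pi)
        return tf_minutes * 60.0    # convert from minutes to seconds

    # Example: 2%-amplitude noise with 30 peaks per minute, and a target
    # of 0.5% amplitude after filtering, gives roughly 1.27 seconds.
    print(filter_time_constant(2.0, 0.5, 30))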

