{"id":1812,"date":"2018-12-05T08:52:28","date_gmt":"2018-12-05T08:52:28","guid":{"rendered":"http:\/\/www.gyanvihar.org\/journals\/?p=1812"},"modified":"2019-06-12T05:48:49","modified_gmt":"2019-06-12T05:48:49","slug":"efficient-fault-tolerant-scheduling-techniques-and-backup-overloading-techniques-for-real-time-system","status":"publish","type":"post","link":"https:\/\/www.gyanvihar.org\/journals\/efficient-fault-tolerant-scheduling-techniques-and-backup-overloading-techniques-for-real-time-system\/","title":{"rendered":"Efficient fault tolerant Scheduling techniques And Backup Overloading techniques For Real Time System"},"content":{"rendered":"<p style=\"text-align: center\"><strong><sup>1<\/sup>Vipinder Bagga,\u00a0 <sup>2<\/sup>Akhilesh Pandey<\/strong><\/p>\n<p style=\"text-align: center\"><sup>1<\/sup>Suresh Gyan Vihar University, Jaipur<\/p>\n<p style=\"text-align: center\"><sup>2<\/sup>Suresh Gyan Vihar University, Jaipur<\/p>\n<p style=\"text-align: justify\"><strong>Abstract: To provide the performance analysis of off-line scheduling algorithms which address the issues of fault tolerance, reliability, real-time, task precedence constraints and heterogeneity in real time systems. The performance measures of these algorithms on which they are differentiated on each other are performance, reliability, schedulability. To compare the performance of backup overloading techniques based on fault tolerance dynamic scheduling algorithms in real time systems. These techniques are measured on the basis of fault rate, task load, task laxity and time to second failure.<\/strong><\/p>\n<p><strong>Keyword: <\/strong><em>Fault tolerant, Real Time System, Scheduling.<\/em><\/p>\n<p>&nbsp;<\/p>\n<p>INTRODUCTION<\/p>\n<p style=\"text-align: justify\">Real-time computer systems are required to produce correct results not only in their values but also in the times at which the results are produced better known as timeliness. 
In these systems, the execution times of the real-time programs are usually constrained by predefined time bounds that together satisfy the timeliness requirements. The trademark of real-time systems is that their tasks have deadlines, and missing too many such deadlines in a row can result in catastrophic failure. As a result, much effort has been devoted in recent years to developing techniques for making such systems highly reliable. These efforts have generally involved the use of massive hardware redundancy, i.e., of using many more processors than are absolutely necessary, to ensure that enough will remain alive, despite failures, to continue providing acceptable levels of service. A failure in a real-time system can result in severe consequences. Such critical systems should be designed a priori to satisfy their timeliness and reliability requirements. To guarantee timeliness, one can estimate the worst-case execution times of the real-time programs and check whether they are schedulable, assuming correct execution times. Despite such design, failures can still occur at runtime due to unexpected system or environment behaviors. Fault tolerance deals with building computing systems that continue to operate satisfactorily in the presence of faults. A fault-tolerant system may be able to tolerate one or more fault types, including \u2013 (i) transient, intermittent or permanent hardware faults, (ii) software and hardware design errors, (iii) operator errors, and (iv) physical damage.<\/p>\n<p style=\"text-align: justify\">An extensive body of work has been developed in this field over the past thirty years, and a number of fault-tolerant machines have been developed &#8211; most of them dealing with random hardware faults, while a smaller number deal with software, design and operator faults to varying degrees. A large amount of supporting research has been reported. 
Fault tolerance and dependable systems research covers a wide spectrum of applications ranging across embedded real-time systems, commercial transaction systems, transportation systems, and military\/space systems &#8211; to name a few. The supporting research includes system architecture, real-time processing, design techniques, coding theory, testing, validation, proof of correctness, modeling, operating systems, parallel processing, and software reliability.<\/p>\n<p style=\"text-align: justify\">These areas often involve widely diverse core expertise spanning formal logic, the mathematics of conditional modeling, software engineering, hardware design and graph theory. Redundancy has long been used in fault-tolerant systems. However, redundancy does not inherently make a system fault-tolerant and adaptive; it is necessary to employ fault-tolerant methods by which the system can tolerate hardware component failures, avoid or predict timing failures, and be reconfigured with little or graceful degradation in reliability and functionality.<\/p>\n<p style=\"text-align: justify\">Early error detection is clearly important for real-time systems; an error is an abstraction for an erroneous system state, the observable result of a failure. The error detection latency of a system is the interval of time from the instant at which the system enters an erroneous state to the instant at which that state is detected. Keeping the error detection latency small provides a better chance to recover from component failures and timing errors, and to perform graceful reconfiguration. However, a small latency alone is not sufficient; fault-tolerant methods need to be provided with sufficient information about the data processing underway in order to take appropriate action when an error is detected. Such information can be obtained during system design and implementation. 
In current practice, the design and implementation of real-time systems often do not sufficiently address fault tolerance and adaptiveness issues. The performance of a real-time system can be improved by proper task allocation and effective uniprocessor scheduling. In this thesis a brief study of existing uniprocessor scheduling schemes and backup overloading techniques is presented, and work has also been done to find appropriate schemes for allocation and scheduling in real-time systems.<\/p>\n<p>The fault-tolerant scheduling problem is defined by the following requirements:<\/p>\n<ol>\n<li>Each task is executed by one processor at a time and each processor executes one task at a time.<\/li>\n<li>All periodic tasks should meet their deadlines.<\/li>\n<li>The number of processor failures to be tolerated should be maximized.<\/li>\n<li>For each task Ti, the primary copy PriTi or the backup BackTi is assigned to only one processor for the duration of ci, and it can be preempted once it starts if there is a task with an earlier deadline than the presently executing task.<\/li>\n<\/ol>\n<p style=\"text-align: justify\">The fault tolerance mechanism involves a number of the following steps. Each step is associated with a specific function, hence the steps can be applied independently in the process of fault handling; the life cycle of fault handling is illustrated in Figure 1.<br \/>\n\u2022 Fault Detection: One of the most important aspects of fault handling in real-time systems is to detect a fault immediately and isolate it to the appropriate unit as quickly as possible [3]. In a system there is no central lookout point from which the entire system can be observed at once [9]; hence fault detection remains a key issue in real-time systems. [16] reveals the impact of faster fault detectors in real-time systems. 
Design goals and an architecture for fault detection in grids have been discussed in [13, 42]. Commonly used fault detection techniques are consensus, deviation alarms and testing.<br \/>\n\u2022 Fault Diagnosis: Determine where the fault is and what caused it; for example, the voter in TMR can indicate which module failed, and Pinpoint can identify failed components.<br \/>\n\u2022 Fault Isolation: If a unit is actually faulty, many fault triggers will be generated for that unit. The main objective of fault isolation is to correlate the fault triggers, identify the faulty unit and then confine the fault to prevent infection, i.e., prevent it from propagating from its point of origin.<br \/>\n\u2022 Fault Masking: Ensuring that only correct values get passed to the system boundary in spite of a failed component.<br \/>\n\u2022 Fault Repair\/Recovery: A process in which faults are removed from the system. Fault repair\/recovery techniques include checkpointing and rollback.<br \/>\n\u2022 Fault Compensation: If a fault occurs and is confined to a subsystem, it may be necessary for the system to provide a response that compensates for the output of the faulty subsystem. Not all of the above steps need be involved in every instance of fault tolerance.<\/p>\n<p>1.2.1 Fault Types<br \/>\nThere are three types of faults:<br \/>\n\u2022 permanent,<br \/>\n\u2022 intermittent,<br \/>\n\u2022 transient.<br \/>\nA permanent fault does not die away with time, but remains until it is repaired. An intermittent fault cycles between the fault-active and fault-benign states. A transient fault dies away after some time.<\/p>\n<p style=\"text-align: justify\">I. PAST STUDY<br \/>\nReal-time systems are computerized systems with timing constraints. Real-time systems can be classified as hard real-time systems and soft real-time systems. 
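The fault-masking step described earlier (exemplified by the voter in TMR) can be sketched as a small Python example. This is an illustrative sketch only; the function name and error-handling behavior are our own, not from the paper.

```python
# Minimal sketch of fault masking via triple modular redundancy (TMR):
# three replicas compute the same value and a voter passes on the
# majority result, masking a single faulty replica.
from collections import Counter

def tmr_vote(replica_outputs):
    """Return the majority value among replica outputs (masks one fault)."""
    value, count = Counter(replica_outputs).most_common(1)[0]
    if count < 2:
        # No majority: more than one replica disagrees, fault cannot be masked.
        raise RuntimeError("no majority among replicas")
    return value

# One faulty replica (41 instead of 42) is outvoted by the two correct ones.
print(tmr_vote([42, 42, 41]))  # -> 42
```

Masking in this style hides the fault from the system boundary without diagnosing or repairing the faulty module; those remain separate steps in the fault-handling life cycle above.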
In hard real-time systems, the consequences of missing a task deadline may be catastrophic. In soft real-time systems, the consequences of missing a deadline are relatively milder. Examples of hard real-time systems are space applications, fly-by-wire aircraft, radar for tracking missiles, etc. Examples of soft real-time systems are on-line transaction processing used in airline reservation systems, multimedia systems, etc. This thesis deals with scheduling of periodic tasks in hard real-time systems. Many hard real-time applications are modeled using recurrent tasks. For example, real-time tasks in many control and monitoring applications are implemented as recurrent periodic tasks. This is because periodic execution of recurrent tasks is well understood and predictable. The most relevant real-time task scheduling concepts are: periodic task systems, ready (active) tasks, task priority, preemptive scheduling algorithms, feasibility conditions of scheduling algorithms, and offline and online scheduling.<br \/>\n2.1.1 Types of Real-Time Systems<br \/>\nConsider two categories into which real-time systems can be classified.<\/p>\n<ul>\n<li>Hard real-time systems are those whose failure (triggered by missing too many hard deadlines) leads to catastrophe. For example, if the computer on a fly-by-wire aircraft fails completely, the aircraft crashes. If a robot carrying out remotely commanded surgery misses a deadline to stop cutting, or delivers too high a dose of radiation (by not switching off the beam at the right time), the patient can die.<\/li>\n<li>In a soft real-time system, missing any number of deadlines may be a cause of user annoyance; however, the outcome is not catastrophic. 
A server telling you the latest score in a cricket match may cause some users great distress by freezing at the most exciting point of the match, but that is not a catastrophe. An airline reservation system may take a very long time to respond, driving its more impatient customers away, but that may still not be classified as a catastrophe.<\/li>\n<li>Thus, the classification of real-time systems into hard and soft depends on the application, not on the computer itself. The same type of computer, running similar software, may be classified in one context as soft while it is hard in another. For example, consider a video or audio-only conferencing application. When used for routine business communications, it can be considered a soft real-time system. If, on the other hand, it is used by police officers and firefighters to coordinate actions at the scene of a major fire, it is a hard real-time system. Or, take a real-time database system. When used to provide sports scores (as we have seen), it is soft; however, if it is used to provide stock market data, an outage can cause significant losses and may be regarded by its users as catastrophic. In this case, it would be considered a hard real-time system. Perhaps a good rule of thumb is to say that the designers of hard real-time systems expend a significant fraction of the development and test time ensuring that task deadlines are met to a very high probability; by contrast, soft real-time systems are really of the \u201cbest effort\u201d variety, in which the system makes an effort to meet deadlines, but not more than that. Note that the same computer system may run both hard and soft tasks. In other words, some of its workload may be critical and some may not be.<br \/>\nMost of the applications for which one requires fault-tolerant scheduling require hard real-time computers. 
Such systems run two types of tasks: periodic and aperiodic.<br \/>\n\u2022 As the term implies, a periodic task, Ti, is issued once every period of Pi seconds. Typically (but not always), the deadline of a periodic task is equal to its period: most of the results in real-time scheduling are based on this assumption.<br \/>\n\u2022 An aperiodic task can be released at any time; however, specifications may limit its rate of arrival to no more than one every \u03c4 seconds.<\/li>\n<\/ul>\n<p style=\"text-align: justify\">The majority of real-time systems are extremely simple, many built out of primitive eight-bit processors. It is not uncommon for such processors to have either no pipeline (meaning that they do not overlap instruction execution) or a very simple pipeline (lacking such features as out-of-order execution or branch prediction); most have no caches. Fault-tolerance features in many such applications are rudimentary at best.<br \/>\nIn many important applications, however, the real-time system carries a heavy workload and is in control of significant physical systems. Such systems have been used in aerospace applications for many years; more recently, they have begun to proliferate in other hard real-time applications as well. These include industrial robots, chemical plants, cars, and mechanisms for remote surgery. It is these types of application for which the fault-tolerant scheduling approaches surveyed here are meant.<\/p>\n<p style=\"text-align: justify\">2.2 Periodic Task Systems<br \/>\nThe basic component of scheduling is a task. A task is a unit of work, such as a program or block of code, that when executed provides some service of an application. Examples of tasks are reading sensor data, a unit of data processing and transmission, etc. 
A periodic task system is a set of tasks in which each task is characterized by a period, a deadline and a worst-case execution time (WCET).<\/p>\n<p><strong>Period:<\/strong> Each task in a periodic task system has an inter-arrival time of occurrence, called the period of the task. In each period, a job of the task is released. A job is ready to execute at the beginning of each period, called the release time of the job.<\/p>\n<p style=\"text-align: justify\"><strong>Deadline:<\/strong> Each job of a task has a relative deadline, which is the time by which the job must finish its execution relative to its release time. The relative deadlines of all the jobs of a particular periodic task are the same. The absolute deadline of a job is the time instant equal to the release time plus the relative deadline.<br \/>\n<strong>WCET:<\/strong> Each periodic task has a WCET, which is the maximum execution time that each job of the task requires between its release time and absolute deadline. If the relative deadline of each task in a task set is less than or equal to its period, then the task set is called a constrained deadline periodic task system. If the relative deadline of each task is exactly equal to its period, then the task set is called an implicit deadline periodic task system. If a periodic task system is neither constrained nor implicit, then it is called an arbitrary deadline periodic task system. That is:<br \/>\nrelative deadline &lt;= period for every task (constrained deadline periodic task system);<br \/>\nrelative deadline = period for every task (implicit deadline periodic task system).<\/p>\n<p style=\"text-align: justify\">2.3 Task Independence<br \/>\nThe tasks of real-time applications may be dependent on one another, for example, due to resource or precedence constraints. 
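The period\/deadline\/WCET definitions above can be sketched in a few lines of Python. This is a minimal illustrative sketch; the class and function names are our own, not part of the paper.

```python
# Sketch of the periodic task model: each task has a period, a relative
# deadline and a WCET, and a task set is implicit / constrained / arbitrary
# depending on how relative deadlines compare to periods.
from dataclasses import dataclass

@dataclass
class PeriodicTask:
    period: float             # inter-arrival time of the task's jobs
    relative_deadline: float  # deadline measured from each release time
    wcet: float               # worst-case execution time per job

    def absolute_deadline(self, release_time):
        """Absolute deadline = release time + relative deadline."""
        return release_time + self.relative_deadline

def classify(task_set):
    """Classify a task set by its deadline model."""
    if all(t.relative_deadline == t.period for t in task_set):
        return "implicit"
    if all(t.relative_deadline <= t.period for t in task_set):
        return "constrained"
    return "arbitrary"

tasks = [PeriodicTask(10, 10, 2), PeriodicTask(20, 20, 5)]
print(classify(tasks))  # -> implicit
```

Note that an implicit deadline task set is also constrained by definition; the classifier above simply reports the most specific category that applies.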
If a resource is shared among multiple tasks, then some tasks may be blocked from being executed until the shared resource is free. Similarly, if tasks have precedence constraints, then one task may need to wait until another task finishes its execution. In this work, however, tasks are assumed to be independent: the only resource the tasks share is the processor platform.<br \/>\n2.3.1 Ready Tasks<br \/>\nIn a periodic task system, a job of a task is released in each period of the task. All jobs that are released but have not completed their individual execution by a time instant t are in the set of ready (active) tasks at time t. Note that there may be no job in the set of ready tasks at one time instant, while there may be a job of every task in the set of ready tasks at another time instant.<\/p>\n<p style=\"text-align: justify\">2.4 Task Priority<br \/>\nWhen two or more ready tasks compete for the use of the processor, some rules are applied to allocate the use of the processor(s). This set of rules is the priority discipline. The selection (by the runtime dispatcher) of the ready task for execution is determined by the priorities of the tasks. The priority of a task can be static or dynamic.<\/p>\n<p style=\"text-align: justify\">\u2022 Static Priority: In static (fixed) priority, each task has a priority that never changes during run time. The different jobs of the same task have the same priority relative to any other tasks. 
For example, according to Liu and Layland, the well-known RM scheduling algorithm assigns static priorities to tasks such that the shorter the period of the task, the higher the priority [2].<\/p>\n<p style=\"text-align: justify\">\u2022 Dynamic Priority: In dynamic priority, different jobs of a task may have different priorities relative to other tasks in the system. In other words, if the priority of the jobs of a task changes from one execution to another, then the priority discipline is dynamic. For example, the well-known Earliest-Deadline-First (EDF) scheduling algorithm assigns dynamic priorities to tasks such that the ready task whose absolute deadline is nearest has the highest priority [2].<\/p>\n<p style=\"text-align: justify\">2.5 Preemptive Scheduling<br \/>\nA scheduling algorithm is preemptive if the release of a new job of a higher priority task can preempt the job of a currently running lower priority task. During runtime, task scheduling is essentially determining the highest priority active tasks and executing them on the free processor. RM and EDF are examples of preemptive scheduling algorithms. Under a non-preemptive scheme, a currently executing task always completes its execution before another ready task starts execution. Therefore, in non-preemptive scheduling a higher priority ready task may need to wait in the ready queue until the currently executing task (possibly of lower priority) completes its execution. This results in worse schedulability performance than in the preemptive case.<\/p>\n<p style=\"text-align: justify\">2.6 Work-Conserving Scheduling<br \/>\nA scheduling algorithm is called work-conserving if it never idles a processor whenever there is a ready task awaiting execution on that processor. 
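The two priority disciplines described above (static RM and dynamic EDF) can be sketched as follows. This is an illustrative sketch only; the function names and the job representation are our own, not from the paper.

```python
# Sketch of the two priority disciplines:
#  - RM fixes priorities offline: the shorter the period, the higher the priority.
#  - EDF decides at run time: the ready job with the earliest absolute
#    deadline has the highest priority.

def rm_priority_order(periods):
    """Static RM order: task indices sorted so that shorter periods come first."""
    return sorted(range(len(periods)), key=lambda i: periods[i])

def edf_pick(ready_jobs):
    """Dynamic EDF choice: the ready job whose absolute deadline is nearest."""
    return min(ready_jobs, key=lambda job: job["abs_deadline"])

# Task with period 5 gets the highest RM priority, then 10, then 20.
print(rm_priority_order([20, 5, 10]))  # -> [1, 2, 0]

# Among two ready jobs, EDF dispatches the one with the earlier deadline.
jobs = [{"id": "J1", "abs_deadline": 14}, {"id": "J2", "abs_deadline": 9}]
print(edf_pick(jobs)["id"])  # -> J2
```

The contrast is visible in the code: `rm_priority_order` depends only on the fixed periods and can be computed once before run time, while `edf_pick` must be re-evaluated whenever the set of ready jobs changes.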
A work-conserving scheduler guarantees that whenever a job is ready to execute and a processor that can execute it is free, the job will be dispatched for execution. For example, the scheduling algorithms RM and EDF are work-conserving by definition. A non-work-conserving algorithm may decide not to execute any task even if there is a ready task awaiting execution. If the processor is to be idled when there is a ready task awaiting execution, then a non-work-conserving scheduling algorithm requires information about all task parameters in order to decide when to idle the processor. Online scheduling algorithms typically do not have clairvoyant information about all the parameters of all future tasks, which means such algorithms are generally work-conserving.<\/p>\n<p>2.7 Feasibility and Optimality of Scheduling<br \/>\nTo predict the temporal behavior and to determine whether the timing constraints of an application's tasks will be met during runtime, feasibility analysis of the scheduling algorithm is conducted. If a scheduling algorithm can generate a schedule for a given set of tasks such that all tasks meet their deadlines, then the schedule of the task set is feasible. If the schedule of a task set is feasible using a scheduling algorithm A, we say that the task set is A-schedulable. A scheduling algorithm is said to be optimal if it can feasibly schedule a task set whenever some other algorithm can schedule the same task set under the same scheduling policy (with respect to, for example, priority assignment, preemptivity, migration, etc.). 
For example, Liu and Layland [2] showed that RM and EDF are optimal uniprocessor scheduling algorithms for static and dynamic priority, respectively.<br \/>\nFeasibility Condition (FC)<br \/>\nFor a given task set, it is computationally impractical to simulate the execution of the tasks at all time instants to see offline whether the task set will be schedulable during runtime. To address this problem, feasibility conditions for scheduling algorithms are derived. A feasibility condition is a set of condition(s) used to determine whether a task set is feasible for a given scheduling algorithm. A feasibility condition can be necessary and sufficient, or sufficient only.<\/p>\n<p style=\"text-align: justify\">Necessary and Sufficient FC (Exact test): A task set will meet all its deadlines if, and only if, it passes the exact test. If the exact FC of a scheduling algorithm A is satisfied, then the task set is A-schedulable. Conversely, if the task set is A-schedulable, then the exact FC of algorithm A is satisfied. Therefore, if the exact FC of a task set is not satisfied, then it is also true that the scheduling algorithm cannot feasibly schedule the task set.<br \/>\nSufficient FC: A task set will meet all its deadlines if it passes the sufficient test. If the sufficient FC of a scheduling algorithm A is satisfied, then the task set is A-schedulable. However, the converse is not necessarily true. Therefore, if the sufficient FC of a task set is not satisfied, then the task set may or may not be schedulable using the scheduling algorithm.<\/p>\n<p style=\"text-align: justify\">2.8 Minimum Achievable Utilization<br \/>\nA processor platform is said to be fully utilized when an increase in the execution time of any of the tasks in a task set will make the task set unschedulable on the platform. 
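The feasibility conditions discussed above can be sketched for the uniprocessor, implicit-deadline case, following the classic Liu and Layland [2] results: EDF has the exact test U <= 1, while RM has the sufficient utilization-bound test U <= n(2^(1/n) - 1). The function names below are illustrative, not from the paper.

```python
# Sketch of utilization-based feasibility tests on one processor
# (implicit-deadline periodic tasks, after Liu and Layland [2]).

def total_utilization(tasks):
    """tasks: list of (wcet, period) pairs; U = sum of wcet/period."""
    return sum(c / p for c, p in tasks)

def rm_utilization_bound(n):
    """Minimum achievable utilization of RM for n tasks: n*(2^(1/n) - 1)."""
    return n * (2 ** (1.0 / n) - 1)

def rm_sufficient_test(tasks):
    """Sufficient FC for RM: U <= n*(2^(1/n) - 1)."""
    return total_utilization(tasks) <= rm_utilization_bound(len(tasks))

def edf_exact_test(tasks):
    """Exact FC for EDF with implicit deadlines: U <= 1."""
    return total_utilization(tasks) <= 1.0

tasks = [(1, 4), (2, 8)]  # U = 0.25 + 0.25 = 0.5
print(round(rm_utilization_bound(2), 3))  # -> 0.828
print(rm_sufficient_test(tasks), edf_exact_test(tasks))  # -> True True
```

The asymmetry between the two tests mirrors the text: failing `edf_exact_test` proves EDF infeasibility, whereas failing `rm_sufficient_test` is inconclusive, since the RM bound is sufficient but not necessary.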
The least upper bound of the total utilization is the minimum of all total utilizations over all the sets of tasks that fully utilize the processor platform. This least upper bound of a scheduling algorithm is called the minimum achievable utilization, or utilization bound, of the scheduling algorithm. A scheduling algorithm can feasibly schedule any set of tasks on a processor platform if the total utilization of the tasks is less than or equal to the minimum achievable utilization of the scheduling algorithm.<\/p>\n<p style=\"text-align: justify\">II. IMPLEMENTATION<br \/>\nThis experiment evaluates performance in terms of schedulability among the four algorithms, namely, OV, FGLS, FRCD and eFRCD, using the SC measure. The workload consists of sets of independent real-time tasks that are to be executed on a homogeneous distributed system. The size of the homogeneous system is fixed at 20, and a common deadline of 100 is selected. The failure rates are uniformly selected from the range between 0.5\u00d710<sup>-6<\/sup> and 3.0\u00d710<sup>-6<\/sup>. Execution time is a random variable uniformly distributed in the range [1, 20]. Schedulability is first measured as a function of task set size, as shown in Fig. 6.<br \/>\nFig. 6 shows that the SC performances of OV and eFRCD are almost identical, as are those of FGLS and FRCD. Considering that eFRCD had to be downgraded for comparability, this result implies that eFRCD is more powerful than OV, because eFRCD can also schedule tasks with precedence constraints to be executed on heterogeneous systems, of which OV is not capable. The results further reveal that both OV and eFRCD significantly outperform FGLS and FRCD in SC, suggesting that FGLS and FRCD are not suitable for scheduling independent tasks. 
The poor performance of FGLS and FRCD can be explained by the fact that they do not employ the BOV scheme. The consequence is twofold. First, FGLS and FRCD require more computing resources than eFRCD, which is likely to lead to a relatively low SC when the number of processors is fixed. Second, the backup copies in FGLS and FRCD cannot overlap with one another on the same processor, and this may result in a much longer schedule length. The deadline is held constant at 100 and the number of processors at m = 16 for all four algorithms.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"size-full wp-image-1857 aligncenter\" src=\"http:\/\/www.gyanvihar.org\/journals\/wp-content\/uploads\/2018\/12\/Capture-104.jpg\" alt=\"\" width=\"266\" height=\"474\" \/><\/p>\n<p style=\"text-align: justify\">In this paper, efficient fault-tolerant real-time scheduling algorithms and backup overloading techniques for real-time systems have been studied. To the best of our knowledge, these scheduling algorithms comprehensively address the issues of reliability, real-time constraints, task precedence constraints, fault tolerance, and heterogeneity. The performance of OV [12], FGLS [5], FRCD [15] and eFRCD [9] is analyzed, and eFRCD is found to be the most relevant scheduling algorithm; the analysis indicates that eFRCD is superior to the rest of the algorithms. The PB-Overloading technique uses dynamic grouping and is defined as the process of dynamically dividing the processors of the system into logical groups as tasks arrive into the system and finish executing. It is flexible in nature because any number of processors and groups can be used with this overloading technique. The BB-Overloading technique uses static grouping, where a processor can be a member of only one group. 
In the PB-Overloading technique a processor can be a member of more than one group, which allows efficient use of backup overloading. It also offers better schedulability than the BB-Overloading technique due to its flexible overloading of backups. The analysis also shows that PB-Overloading offers a higher fault-tolerance degree than BB-Overloading due to its dynamic formation of groups. Primary-backup overloading thus offers better schedulability and reliability to the system than BB-Overloading.<\/p>\n<p style=\"text-align: justify\"><strong>References<\/strong><br \/>\n[1] S. Lauzac, R. Melhem, and D. Moss\u00e9. An Efficient RMS Control and Its Application to Multiprocessor Scheduling. In Proceedings of the International Parallel Processing Symposium, pages 511\u2013518, 1998.<br \/>\n[2] C. L. Liu and J. W. Layland. Scheduling Algorithms for Multiprogramming in a Hard-Real-Time Environment. Journal of the ACM, 20(1):46\u201361, 1973.<br \/>\n[3] B. Andersson, S. Baruah, and J. Jonsson. Static-Priority Scheduling on Multiprocessors. In Proceedings of the IEEE Real-Time Systems Symposium, pages 193\u2013202, 2001.<br \/>\n[4] Viacheslav Izosimov. Scheduling and Optimization of Fault-Tolerant Embedded Systems. Ph.D. Thesis, Link\u00f6ping University, November 2006.<br \/>\n[5] A. Girault, C. Lavarenne, M. Sighireanu and Y. Sorel. \u201cFault-Tolerant Static Scheduling for Real-Time Distributed Embedded Systems.\u201d In Proc. of the 21st International Conference on Distributed Computing Systems (ICDCS), Phoenix, USA, April 2001.<br \/>\n[6] J. Lehoczky, L. Sha, and Y. Ding. The Rate Monotonic Scheduling Algorithm: Exact Characterization and Average Case Behavior. In Proceedings of the IEEE Real-Time Systems Symposium, pages 166\u2013171, 1989.<br \/>\n[7] T. P. Baker. 
An Analysis of Fixed-Priority Schedulability on a Multiprocessor. Real-Time Systems, 32(1-2):49\u201371, 2006.<\/p>\n<p style=\"text-align: justify\">[8] Akash Kumar. \u201cScheduling for Fault-Tolerant Distributed Embedded Systems.\u201d IEEE Computer, 2008.<br \/>\n[9] Ozalp Babaoglu and Keith Marzullo. Consistent Global States of Distributed Systems: Fundamental Concepts and Mechanisms. In S. Mullender, editor, Distributed Systems, pages 55\u201396. Addison-Wesley, 1993.<br \/>\n[10] M. Joseph and P. Pandya. Finding Response Times in a Real-Time System. The Computer Journal, 29(5):390\u2013395, 1986.<br \/>\n[11] L. Lundberg. Analyzing Fixed-Priority Global Multiprocessor Scheduling. In Proceedings of the IEEE Real-Time Technology and Applications Symposium, pages 145\u2013153, 2002.<\/p>\n<p style=\"text-align: justify\">[12] Y. Oh and S. H. Son. \u201cScheduling Real-Time Tasks for Dependability.\u201d Journal of the Operational Research Society, vol. 48, no. 6, pp. 629\u2013639, June 1997.<br \/>\n[13] Paul Stelling, Cheryl DeMatteis, Ian T. Foster, Carl Kesselman, Craig A. Lee, and Gregor von Laszewski. A Fault Detection Service for Wide Area Distributed Computations. Cluster Computing, 2(2):117\u2013128, 1999.<br \/>\n[14] N. Audsley, A. Burns, M. Richardson, K. Tindell, and A. J. Wellings. Applying New Scheduling Theory to Static Priority Pre-emptive Scheduling. Software Engineering Journal, 8(5):284\u2013292, 1993.<br \/>\n[15] X. Qin, H. Jiang, and D. R. Swanson. \u201cA Fault-Tolerant Real-Time Scheduling Algorithm for Precedence-Constrained Tasks in Distributed Heterogeneous Systems.\u201d Technical Report TRUNL-CSE-2001-1003, Department of Computer Science and Engineering, University of Nebraska-Lincoln, September 2001.<\/p>\n<p>[16] Marcos K. 
Aguilera, Gerard Le Lann, and Sam Toueg. On the Impact of Fast Failure Detectors on Real-Time Fault-Tolerant Systems. In DISC 2002, Springer-Verlag, pages 354\u2013369, 2002.<br \/>\n[17] L. Sha, J. P. Lehoczky, and R. Rajkumar. Solutions for Some Practical Problems in Prioritized Preemptive Scheduling. In Proceedings of the IEEE Real-Time Systems Symposium, pages 181\u2013191, 1986.<br \/>\n[18] S. Baruah and J. Goossens. The Static-Priority Scheduling of Periodic Task Systems upon Identical Multiprocessor Platforms. In Proceedings of the IASTED International Conference on PDCS, pages 427\u2013432, 2003.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>1Vipinder Bagga,\u00a0 2Akhilesh Pandey 1Suresh Gyan Vihar University, Jaipur 2Suresh Gyan Vihar University, Jaipur Abstract: To provide the performance analysis of off-line scheduling algorithms which address the issues of fault tolerance, reliability, real-time, task precedence constraints and heterogeneity in real time systems.
The performance measures of these algorithms on which they are differentiated on each [&hellip;]<\/p>\n","protected":false},"author":5,"categories":[26,27],"class_list":["post-1812","post","type-post","status-publish","format-standard","hentry","category-international-journal-of-converging-technologies-management","category-volume-1-issue-1-2015"]}