Multi-Agent Reinforcement Learning Approaches for Distributed Job-Shop Scheduling Problems
Please use this identifier to cite or link to this item:
https://osnadocs.ub.uni-osnabrueck.de/handle/urn:nbn:de:gbv:700-2009081216
Title: | Multi-Agent Reinforcement Learning Approaches for Distributed Job-Shop Scheduling Problems |
Author(s): | Gabel, Thomas |
First referee: | Prof. Dr. Martin Riedmiller |
Second referee: | Prof. Dr. Hector Munoz-Avila |
Abstract: | Decentralized decision-making is an active research topic in artificial intelligence. In a distributed system, a number of individually acting agents coexist. If they strive to accomplish a common goal, establishing coordinated cooperation between the agents is of utmost importance. With this in mind, our focus is on multi-agent reinforcement learning (RL) methods, which allow cooperative policies to be acquired automatically based solely on a specification of the desired joint behavior of the whole system.
The decentralization of the control and observation of the system among independent agents, however, has a significant impact on problem complexity. We therefore address the intricacy of learning and acting in multi-agent systems by two complementary approaches.
First, we identify a subclass of general decentralized decision-making problems that features regularities in the way the agents interact with one another. We show that the complexity of optimally solving a problem instance from this class is provably lower than that of solving a general one.
Although restricting attention to certain subclasses of general multi-agent problems may place us in a lower complexity class, the computational complexity may still be so high that solving a problem optimally is infeasible. Hence, our second goal is to develop techniques capable of quickly obtaining approximate solutions in the vicinity of the optimum. To this end, we develop and utilize various model-free reinforcement learning approaches.
Many real-world applications are well suited to being formulated in terms of spatially or functionally distributed entities, and job-shop scheduling represents one such application. We interpret job-shop scheduling problems as distributed sequential decision-making problems, employ the multi-agent RL algorithms we propose for solving such problems, and evaluate the performance of our learning approaches on various established scheduling benchmark problems. |
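The abstract's core idea, interpreting job-shop scheduling as a distributed sequential decision-making problem in which each machine is an independently learning agent that picks the next waiting job, can be illustrated with a minimal sketch. The toy instance data, the agent-local state encoding, and the Monte Carlo update with a shared terminal reward (negative makespan) are illustrative choices made here, not the thesis's actual algorithms or benchmarks:

```python
import random
from collections import defaultdict

# A toy 3-job, 2-machine instance (hypothetical data for illustration):
# each job is an ordered list of (machine, duration) operations.
JOBS = [
    [(0, 3), (1, 2)],
    [(0, 2), (1, 4)],
    [(1, 3), (0, 2)],
]
N_OPS = sum(len(job) for job in JOBS)

def simulate(Q, epsilon, rng):
    """Build one schedule. Whenever a machine must pick its next job,
    its agent chooses epsilon-greedily from its local queue using the
    Q-table. Returns (makespan, list of visited (state, action))."""
    next_op = [0] * len(JOBS)      # index of each job's next operation
    job_ready = [0] * len(JOBS)    # earliest start time of that operation
    mach_free = [0, 0]             # time at which each machine becomes idle
    trace, scheduled = [], 0
    while scheduled < N_OPS:
        # Group jobs by the machine their next operation needs.
        queues = defaultdict(list)
        for j, ops in enumerate(JOBS):
            if next_op[j] < len(ops):
                queues[ops[next_op[j]][0]].append(j)
        # The machine able to start an operation earliest decides next.
        m = min(queues, key=lambda k: min(
            max(job_ready[j], mach_free[k]) for j in queues[k]))
        state = (m, tuple(sorted(queues[m])))  # agent-local state
        if rng.random() < epsilon:
            j = rng.choice(queues[m])
        else:
            j = max(queues[m], key=lambda j: Q[(state, j)])
        duration = JOBS[j][next_op[j]][1]
        finish = max(job_ready[j], mach_free[m]) + duration
        mach_free[m] = job_ready[j] = finish
        next_op[j] += 1
        scheduled += 1
        trace.append((state, j))
    return max(job_ready), trace

# Independent learners coupled only through a shared terminal reward
# (-makespan): a simple Monte Carlo update per visited state-action pair.
rng = random.Random(0)
Q, alpha, best = defaultdict(float), 0.1, float("inf")
for episode in range(2000):
    makespan, trace = simulate(Q, epsilon=0.2, rng=rng)
    best = min(best, makespan)
    for state, action in trace:
        Q[(state, action)] += alpha * (-makespan - Q[(state, action)])
print("best makespan found:", best)  # the optimum for this instance is 9
```

Note the decentralization: no agent ever sees the full system state, only which jobs are currently waiting at its own machine, which is exactly what makes coordinated cooperation non-trivial.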
URL: | https://osnadocs.ub.uni-osnabrueck.de/handle/urn:nbn:de:gbv:700-2009081216 |
Keywords: | reinforcement learning; multi-agent systems; decentralized control; job-shop scheduling; neural networks; DEC-MDP; multi-agent learning |
Issue date: | 10-Aug-2009 |
Submission date: | 10-Aug-2009 |
Publication type: | Dissertation or habilitation [doctoralThesis] |
Appears in collections: | FB06 - E-Dissertationen |
Files in this item:
File | Description | Size | Format | |
---|---|---|---|---|
E-Diss925_thesis.pdf | Presentation format | 2.76 MB | Adobe PDF | Open/View |
All items in the osnaDocs repository are protected by copyright, unless otherwise indicated. rightsstatements.org