Please use this identifier to cite or link to this item:
|Title:||Building agents for power trading agent competition (TAC)|
|Authors:||Tian, Maokun|
|Keywords:||DRNTU::Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence|
|Issue Date:||2015|
|Abstract:||The sustainable power system of the future not only needs environmentally friendly, low-cost, and renewable energy sources, but must also be highly efficient in terms of distribution. The Power Trading Agent Competition (Power TAC) is an annual autonomous trading competition organized by AAMAS/AAAI, which builds a highly complex model of contemporary and future electricity wholesale and distribution markets. The main components of the simulation environment are customer models, electricity producers, and retail/wholesale brokers. Customer models represent power consumers such as households, small to large businesses, residential buildings, wind parks, and owners of solar panels and electric vehicles; some of these can also generate power and resell the excess to the market. The ultimate goal of brokers is to maximize profit by offering electricity tariffs to customers and trading electricity in the wholesale market (buy low, sell high), while carefully balancing the power supply and demand in their portfolios. Tariffs can either be power supply tariffs for customers who consume power or power purchase tariffs for customers who generate power and have excess to resell. In the wholesale market, brokers can buy or sell electricity with producers (generating corporations), industrial sites, and other brokers. In summary, brokers in the competition act as profit-maximizing intermediaries. In this project, we aim to build a trading agent that maximizes profit in the scenario described. As Power TAC is an annual competition, we study publications on, and the behaviour of, existing brokers developed at other universities. In particular, we investigate some classic algorithms, for example the Markov Decision Process (MDP). We also make some assumptions and adopt heuristic approaches in building the agent in order to simplify the scenario.|
|URI:||http://hdl.handle.net/10356/62662|
|Rights:||Nanyang Technological University|
|Fulltext Permission:||restricted|
|Fulltext Availability:||With Fulltext|
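The abstract names the Markov Decision Process as one of the classic algorithms investigated. As a hedged illustration only (the states, actions, transition probabilities, and rewards below are invented for the example and are not the project's actual model), a broker's tariff-pricing decision could be framed as a small MDP and solved with value iteration:

```python
# Toy MDP sketch for a broker's tariff-pricing decision.
# All states, actions, probabilities, and rewards are hypothetical.

# States: coarse buckets of the broker's retail market share.
STATES = ["low", "mid", "high"]
# Actions: adjust the published tariff price.
ACTIONS = ["lower_price", "raise_price"]

# P[s][a] = list of (next_state, probability); R[s][a] = expected profit.
P = {
    "low":  {"lower_price": [("mid", 0.7), ("low", 0.3)],
             "raise_price": [("low", 1.0)]},
    "mid":  {"lower_price": [("high", 0.6), ("mid", 0.4)],
             "raise_price": [("low", 0.5), ("mid", 0.5)]},
    "high": {"lower_price": [("high", 1.0)],
             "raise_price": [("mid", 0.6), ("high", 0.4)]},
}
R = {
    "low":  {"lower_price": -1.0, "raise_price": 0.5},
    "mid":  {"lower_price": 1.0,  "raise_price": 2.0},
    "high": {"lower_price": 2.0,  "raise_price": 4.0},
}

def value_iteration(gamma=0.9, eps=1e-6):
    """Compute state values and a greedy tariff policy."""
    V = {s: 0.0 for s in STATES}
    while True:
        delta = 0.0
        for s in STATES:
            # Bellman optimality backup for state s.
            best = max(
                R[s][a] + gamma * sum(p * V[s2] for s2, p in P[s][a])
                for a in ACTIONS
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < eps:
            break
    # Greedy policy with respect to the converged value function.
    policy = {
        s: max(ACTIONS,
               key=lambda a: R[s][a]
               + gamma * sum(p * V[s2] for s2, p in P[s][a]))
        for s in STATES
    }
    return V, policy

V, policy = value_iteration()
```

In this toy setup, a larger market share yields larger expected profit, so the converged values satisfy V["high"] > V["mid"] > V["low"]; a real Power TAC broker would face far larger state and action spaces, which is one reason heuristic simplifications are adopted.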
|Appears in Collections:||SCSE Student Reports (FYP/IA/PA/PI)|
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.