Reinforcement Learning-Based Energy Storage System Control for Optimal Operation of a Virtual Power Plant
Kyung-Bin Kwon, Jong-Young Park, Hosung Jung, Sumin Hong, Jae-Haeng Heo. The Korean Institute of Electrical Engineers, 2023, The Transactions of the Korean Institute of Electrical Engineers, Vol. 72, No. 11
In this paper, we design a framework for an energy storage system (ESS) controller in a virtual power plant (VPP) that maximizes profit. We consider a VPP that includes photovoltaics, wind turbines, and demand along with ESSs, and describe the environment as a Markov decision process (MDP). To find the best policy for ESS charging and discharging control, we implement a deep Q-network (DQN) method that trains a neural network to estimate Q-function values for each possible discrete action. In a numerical test using real-world data from Namgwangju Station, ERCOT, and the US government, we train the DQN and demonstrate that the proposed algorithm converges. Testing the trained policy, we show that it functions effectively in scenarios with uncertainty from renewable generation and load, as it responds adaptively to electricity prices.
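The abstract describes a DQN that maps a VPP state to Q-values over discrete ESS actions. A minimal NumPy sketch of that idea (not the authors' implementation; the state features, network size, and hyperparameters below are illustrative assumptions) could look like:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy setup: state = [state of charge, price, net load];
# discrete actions: 0 = discharge, 1 = idle, 2 = charge.
STATE_DIM, HIDDEN, N_ACTIONS = 3, 16, 3

# One-hidden-layer Q-network parameters
W1 = rng.normal(0, 0.1, (HIDDEN, STATE_DIM))
b1 = np.zeros(HIDDEN)
W2 = rng.normal(0, 0.1, (N_ACTIONS, HIDDEN))
b2 = np.zeros(N_ACTIONS)

def q_values(s):
    """Forward pass: Q(s, a) for every discrete action a."""
    h = np.maximum(0.0, W1 @ s + b1)          # ReLU hidden layer
    return W2 @ h + b2, h

def act(s, eps=0.1):
    """Epsilon-greedy action selection over the discrete action set."""
    if rng.random() < eps:
        return int(rng.integers(N_ACTIONS))
    q, _ = q_values(s)
    return int(np.argmax(q))

def td_update(s, a, r, s_next, gamma=0.99, lr=1e-2):
    """One semi-gradient TD(0) step toward the bootstrapped DQN target."""
    global W1, b1, W2, b2
    q, h = q_values(s)
    q_next, _ = q_values(s_next)
    target = r + gamma * np.max(q_next)
    err = q[a] - target                       # TD error for the taken action
    # Backpropagate through the taken action's output only
    gW2 = np.zeros_like(W2); gW2[a] = err * h
    gb2 = np.zeros_like(b2); gb2[a] = err
    dh = err * W2[a] * (h > 0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * np.outer(dh, s); b1 -= lr * dh
    return err
```

A full DQN would add a replay buffer and a target network; this sketch only shows the core Q-function approximation and update that the abstract refers to.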
A Method for Estimating Time-Dependent Spinning Reserve Considering Uncertainties of Demand and Supply
Kyung-Bin Kwon, Hyeon-Gon Park, Jae-Kun Lyu, Yu-Chang Kim, Jong-Keun Park. The Korean Institute of Electrical Engineers, 2013, The Transactions of the Korean Institute of Electrical Engineers, Vol. 62, No. 11
Renewable energy integration and increased system complexity make it harder than before for system operators to maintain the supply-demand balance. To keep the grid frequency within a stable range, an appropriate spinning reserve margin should be procured in consideration of the ever-changing system situation, including demand, wind power output, and generator failures. This paper proposes a novel concept of dynamic reserve, which assigns a different spinning reserve margin to each time period. To investigate the effectiveness of the proposed dynamic reserve, we developed a new short-term reliability criterion that estimates the probability of spinning reserve shortage events, thus indicating grid frequency stability. Uncertainties in demand forecast error, wind generation forecast error, and generator failure are modeled probabilistically, and the proposed spinning reserve is applied to generation scheduling. The approach is tested on a modified IEEE 118-bus system with a wind farm. The results show that the required spinning reserve margin changes with the system situation of demand, wind generation, and generator failure. Moreover, the proposed approach can be applied even when the system configuration changes, such as an expansion of wind generation.
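The reliability criterion described here estimates the probability of a reserve shortage under probabilistic demand error, wind error, and generator failure. One simple way to evaluate such a probability is Monte Carlo sampling; the sketch below is a toy illustration with assumed Gaussian forecast errors and a single-unit outage model (the distributions, parameter values, and unit sizes are not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

def shortage_probability(reserve_mw, n_samples=100_000,
                         demand_sigma=50.0, wind_sigma=80.0,
                         gen_capacity_mw=200.0, outage_rate=0.02):
    """Monte Carlo estimate of P(reserve shortage) under Gaussian
    demand/wind forecast errors and a single-unit outage model.
    All parameter values are illustrative assumptions."""
    demand_err = rng.normal(0.0, demand_sigma, n_samples)  # MW of under-forecast demand
    wind_err = rng.normal(0.0, wind_sigma, n_samples)      # MW of wind shortfall
    outage = rng.random(n_samples) < outage_rate           # does a unit trip?
    deficit = demand_err + wind_err + outage * gen_capacity_mw
    return float(np.mean(deficit > reserve_mw))
```

Evaluating this for each hour's forecast conditions would yield a time-varying reserve requirement, which is the essence of the dynamic reserve concept: the margin needed to hold the shortage probability below a target changes as demand, wind, and outage risk change.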
Development of a Reinforcement Learning-Based HVAC Control Agent for Regulating Fine Dust Concentration in Stations
Kyung-Bin Kwon, Sumin Hong, Jae-Haeng Heo, Hosung Jung, Jong-Young Park. The Korean Institute of Electrical Engineers, 2021, The Transactions of the Korean Institute of Electrical Engineers, Vol. 70, No. 10
This study developed a reinforcement learning-based energy management agent that controls the concentration of fine dust by adjusting the power consumption of energy facilities, such as air conditioners and blowers, in stations. To apply reinforcement learning, the problem was first formulated as a Markov decision process, and a model was developed to predict the fine dust concentration in the station using data correlated with fine dust. Based on a linear reward function built on this model, the deep Q-network (DQN) method was applied to obtain the optimal policy with an artificial neural network. In the case study, convergence to the optimal policy was confirmed through the learning process, and the learned agent was shown to lower the fine dust concentration by increasing the power consumption of the air conditioner when the concentration in the station rises above a certain level.
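The linear reward function mentioned above trades off air quality against energy use. A minimal sketch of a reward of that kind (the threshold and weights are illustrative assumptions, not the paper's values) might be:

```python
# Assumed toy reward: penalize fine dust above a target level and
# penalize power consumption, with a linear trade-off.
DUST_TARGET = 50.0          # ug/m3, assumed comfort threshold
W_DUST, W_POWER = 1.0, 0.1  # assumed trade-off weights

def reward(dust_ug_m3: float, power_kw: float) -> float:
    """Linear reward: penalize excess dust concentration and energy use."""
    excess = max(0.0, dust_ug_m3 - DUST_TARGET)
    return -(W_DUST * excess + W_POWER * power_kw)
```

Under such a reward, spending power is only worthwhile when it reduces dust above the threshold, which matches the learned behavior the abstract reports: the agent raises air-conditioner consumption once the concentration exceeds a certain level.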
Development of a Reinforcement Learning-Based Energy Management Agent for HVAC and Energy Storage System Control
Jong-Young Park, Kyung-Bin Kwon, Sumin Hong, Jae-Haeng Heo, Hosung Jung. The Korean Institute of Electrical Engineers, 2022, The Transactions of the Korean Institute of Electrical Engineers, Vol. 71, No. 10
In this paper, we propose an energy management agent that controls HVAC facilities and an ESS using the policy gradient method, one of the reinforcement learning techniques. For this purpose, an artificial neural network was trained by supervised learning to predict the change in fine dust concentration in the station in response to control of the fine dust reduction facilities. This network was used as the transition function of the Markov decision process, and the optimal policy, represented as a conditional normal distribution, was obtained through the policy gradient method. In the case study, the energy management agent based on the artificial neural network and the policy gradient method was trained using actual data from Nam-Gwangju Station. It was confirmed that the total electricity cost was reduced by adjusting the charging and discharging of the energy storage device according to the time-varying electricity price.
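The policy described here is a conditional normal distribution whose mean depends on the state, updated by the policy gradient. A minimal REINFORCE-style sketch with a linear-Gaussian policy (not the authors' implementation; the linear mean, fixed variance, and learning rate are illustrative assumptions) could look like:

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_action(theta, s, sigma=1.0):
    """Sample a continuous action from the Gaussian policy N(theta.s, sigma^2)."""
    return rng.normal(theta @ s, sigma)

def gaussian_log_prob_grad(theta, s, a, sigma=1.0):
    """Gradient of log N(a | theta.s, sigma^2) with respect to theta."""
    mu = theta @ s
    return (a - mu) / sigma**2 * s

def reinforce_step(theta, trajectory, lr=1e-2, sigma=1.0):
    """One REINFORCE update: theta += lr * sum_t G_t * grad log pi(a_t | s_t),
    where trajectory is a list of (state, action, reward) tuples."""
    rewards = [r for _, _, r in trajectory]
    G, returns = 0.0, []              # undiscounted returns-to-go G_t
    for r in reversed(rewards):
        G = r + G
        returns.append(G)
    returns.reverse()
    grad = np.zeros_like(theta)
    for (s, a, _), G_t in zip(trajectory, returns):
        grad += G_t * gaussian_log_prob_grad(theta, s, a, sigma)
    return theta + lr * grad
```

Because the policy is a distribution rather than a table of discrete actions, this formulation handles continuous control signals (e.g., charge/discharge power levels) directly, which is one common reason to prefer policy gradient over a DQN in this setting.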