Quantum repeaters play a crucial role in the effective distribution of entanglement over long distances. The near-term type of quantum repeater requires two operations: entanglement generation between neighbouring repeaters, and entanglement swapping to promote short-range entanglement to long-range entanglement. For many hardware setups, these actions are probabilistic, leading to longer distribution times and additional errors. Significant effort has been invested in finding the optimal entanglement-distribution policy, i.e. the protocol specifying when a network node should generate or swap entanglement, such that the expected time to distribute long-distance entanglement is minimal. This problem becomes even more intricate in realistic scenarios, particularly when classical communication delays are taken into account. In this work, we formulate the problem as a Markov decision process and use reinforcement learning (RL) to optimise over centralised strategies, in which one designated node instructs the other nodes which actions to perform. In contrast to most RL models, ours is readily interpretable. Additionally, we introduce and evaluate a fixed local policy, the ‘predictive swap-asap’ policy, in which nodes coordinate only with their nearest neighbours. Compared to the ‘wait-for-broadcast swap-asap’ policy, a straightforward generalisation of the common swap-asap policy to the setting with classical communication effects, both of the aforementioned entanglement-delivery policies deliver entanglement faster at high success probabilities. Our work showcases the merit of policies that act on incomplete information in the realistic regime where classical communication effects are significant.
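To make the setting concrete, the sketch below simulates a small homogeneous repeater chain under the swap-asap policy and estimates the expected entanglement-delivery time by Monte Carlo. It is a minimal illustration under assumptions of ours, not the model of this work: the chain length, the per-step probabilities p_gen and p_swap, the `delivery_time` helper and, crucially, instantaneous classical communication (the very idealisation that the wait-for-broadcast and predictive variants relax) are all assumptions introduced here.

```python
import random

def delivery_time(n_segments: int, p_gen: float, p_swap: float,
                  rng: random.Random) -> int:
    """Time steps until an end-to-end link spans a chain of `n_segments`
    elementary segments under an idealised swap-asap policy (assumption:
    swaps are coordinated with zero classical-communication delay)."""
    links: set[tuple[int, int]] = set()  # entangled pairs as (left, right) node intervals

    def memory_free(node: int, side: str) -> bool:
        # 'R': the memory a node uses for links towards higher-numbered nodes,
        # 'L': the memory it uses for links towards lower-numbered nodes.
        return not any((side == 'R' and a == node) or (side == 'L' and b == node)
                       for (a, b) in links)

    t = 0
    while (0, n_segments) not in links:
        t += 1
        # 1) Attempt elementary-link generation wherever both memories are free.
        for k in range(n_segments):
            if memory_free(k, 'R') and memory_free(k + 1, 'L') and rng.random() < p_gen:
                links.add((k, k + 1))
        # 2) Swap-asap: any node holding links on both sides swaps immediately;
        #    a failed swap destroys both input links.
        swapped = True
        while swapped:
            swapped = False
            for node in range(1, n_segments):
                left = next(((a, b) for (a, b) in links if b == node), None)
                right = next(((a, b) for (a, b) in links if a == node), None)
                if left is not None and right is not None:
                    links.discard(left)
                    links.discard(right)
                    if rng.random() < p_swap:
                        links.add((left[0], right[1]))
                    swapped = True
    return t

# Monte Carlo estimate of the expected delivery time (illustrative parameters).
rng = random.Random(7)
for p in (0.5, 0.9):
    samples = [delivery_time(4, p_gen=p, p_swap=p, rng=rng) for _ in range(2000)]
    print(f"p = {p}: mean delivery time ~ {sum(samples) / len(samples):.1f} steps")
```

The estimated mean delivery time is exactly the objective that entanglement-distribution policies aim to minimise; introducing classical communication delays into such a simulation is what makes the policy-optimisation problem studied here substantially harder.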