Computational fluid dynamics (CFD) is widely used in science and engineering. However, such simulations demand large computational resources, which makes it difficult to run them repeatedly under varying condition settings. Recently, deep learning has attracted attention as a surrogate method that approximates CFD simulation results at high speed. We are developing a parallelization method that makes it possible to apply deep-learning-based surrogates to large-scale geometries. The method predicts large-scale steady-flow simulation results by dividing the input geometry into multiple subdomains and applying a single small neural network to each subdomain in parallel. It is designed around the characteristics of CFD simulation and the consistency of boundary conditions between the divided subdomains: by using the physical values of adjacent subdomains as boundary conditions, applying deep learning to each subdomain yields predictions that are consistent across the entire computational domain. Our final goal is to apply this parallel method as a surrogate for approximating 3D blood-flow simulation. In this presentation, we introduce the method and show predicted results for steady flows obtained with it.
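The subdomain decomposition with boundary-value exchange described above can be sketched as follows. This is a minimal illustrative example, not the authors' implementation: `local_surrogate` is a hypothetical stand-in for the trained neural network (here, one Jacobi relaxation sweep of a 1D steady diffusion problem), and the loop shows how each subdomain is updated using the current values of its neighbors as boundary conditions until the pieces agree globally.

```python
import numpy as np

def local_surrogate(block):
    """Stand-in for the trained per-subdomain network: one Jacobi
    relaxation sweep of the 1D Laplace equation on the block's
    interior. The first and last entries of the block act as the
    boundary conditions supplied by the neighboring subdomains."""
    out = block.copy()
    out[1:-1] = 0.5 * (block[:-2] + block[2:])
    return out

def predict_decomposed(u, n_sub=4, n_iter=2000):
    """Divide the interior of a 1D field into n_sub subdomains and
    repeatedly apply the surrogate to each one, exchanging boundary
    values with the adjacent subdomains between iterations."""
    u = u.copy()
    # subdomain edges over the interior points (ends are fixed BCs)
    edges = np.linspace(1, len(u) - 1, n_sub + 1, dtype=int)
    for _ in range(n_iter):
        new = u.copy()
        for a, b in zip(edges[:-1], edges[1:]):
            # each block carries one ghost point on each side, taken
            # from the current state of the neighboring subdomains
            new[a:b] = local_surrogate(u[a - 1:b + 1])[1:-1]
        u = new
    return u

# Steady 1D diffusion with u(0)=0, u(1)=1: the exact solution is linear,
# so a consistent global prediction should recover a straight line.
u0 = np.zeros(17)
u0[-1] = 1.0
u = predict_decomposed(u0)
exact = np.linspace(0.0, 1.0, 17)
print(np.max(np.abs(u - exact)) < 1e-3)  # True
```

In the actual method, each subdomain update would be a forward pass of the shared network and the subdomains could be processed in parallel, since every block reads only the previous iteration's values of its neighbors.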