Graybat 1.1
Graph Approach for Highly Generic Communication Schemes Based on Adaptive Topologies
Communication and Graph Environment

The communication and graph environment (cage) provides a graph-based virtual overlay network and is implemented using policy-based design. In other words, the cage is an interface that provides communication methods on the basis of an existing communication library, while the possible paths of communication are described by a graph.

The behavior of the cage is defined by a communication policy and a graph policy, which are provided as template arguments to the cage class. The following listings show how to configure and use the cage with a predefined communication policy and graph policy; an assembled sketch follows the configuration steps.

Configure the graybat cage

  1. Include the graybat cage, the communication policy, the graph policy, and predefined functors for communication patterns and mappings.
    #include <graybat/Cage.hpp>
    #include <graybat/communicationPolicy/BMPI.hpp>
    #include <graybat/graphPolicy/BGL.hpp>
    #include <graybat/pattern/GridDiagonal.hpp>
    #include <graybat/mapping/Consecutive.hpp>
  2. Define the communication policy to use (Boost.MPI).
    typedef graybat::communicationPolicy::BMPI CommunicationPolicy;
    typedef typename CommunicationPolicy::Config Config;
  3. Define the graph policy to use (Boost Graph Library).
    typedef graybat::graphPolicy::BGL<> GraphPolicy;
  4. Define the cage type through its policies.
    typedef graybat::Cage<CommunicationPolicy, GraphPolicy> Cage;
  5. Create a cage instance and set the graph-creation functor that describes the communication pattern.
    Cage cage;
    cage.setGraph(graybat::pattern::GridDiagonal(100,100));
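
Put together, the configuration steps above might form the following minimal sketch. This is only an assembled version of the listings on this page; the order of the template arguments to graybat::Cage follows step 4, and error handling is omitted.

    #include <graybat/Cage.hpp>
    #include <graybat/communicationPolicy/BMPI.hpp>
    #include <graybat/graphPolicy/BGL.hpp>
    #include <graybat/pattern/GridDiagonal.hpp>

    int main() {
        // Policies as chosen in the steps above
        typedef graybat::communicationPolicy::BMPI CommunicationPolicy;
        typedef graybat::graphPolicy::BGL<>        GraphPolicy;
        typedef graybat::Cage<CommunicationPolicy, GraphPolicy> Cage;

        // Create the cage and describe the communication pattern
        Cage cage;
        cage.setGraph(graybat::pattern::GridDiagonal(100,100));

        return 0;
    }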

Mapping Operations

  1. distribute: Distributes the vertices of the graph to the peers based on a mapping (see the combined sketch after this list).
    cage.distribute(graybat::mapping::Consecutive());
  2. hostedVertices: The vertices mapped to the peer itself are called hosted vertices.
    typedef Cage::Vertex Vertex;
    // Iterate over all vertices that I host
    for(Vertex vertex : cage.hostedVertices){
        std::cout << vertex.id << std::endl;
    }
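
A short sketch of the typical order of these calls, assuming the cage was configured as in the previous section: the graph is set first, distribute assigns vertices to peers, and only afterwards does hostedVertices describe the vertices of this peer.

    // Describe the communication pattern, then map its vertices to peers
    cage.setGraph(graybat::pattern::GridDiagonal(100,100));
    cage.distribute(graybat::mapping::Consecutive());

    // hostedVertices is meaningful only after distribute()
    typedef Cage::Vertex Vertex;
    for(Vertex vertex : cage.hostedVertices){
        std::cout << vertex.id << std::endl;
    }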

Graph Operations

A peer is responsible for the communication of all its hosted vertices. Therefore, the common approach to communication in graybat is to iterate over the set of hosted vertices, send data to adjacent vertices connected by an outgoing edge, and receive data from adjacent vertices connected by an incoming edge. A combined sketch of this loop follows the point-to-point operations below.

  1. getOutEdges: Retrieve outgoing edges of hosted vertices. This information can be used to send data to adjacent vertices.
    typedef Cage::Edge Edge;
    for(Vertex vertex : cage.hostedVertices){
        for(Edge outEdge : cage.getOutEdges(vertex)){
            Vertex sourceVertex = outEdge.source;
            Vertex targetVertex = outEdge.target;
            std::cout << "From vertex " << sourceVertex.id
                      << " over edge " << outEdge.id
                      << " to vertex " << targetVertex.id << std::endl;
        }
    }
  2. getInEdges: Retrieve incoming edges of hosted vertices. This information can be used to receive data from adjacent vertices.
    for(Vertex vertex : cage.hostedVertices){
        for(Edge inEdge : cage.getInEdges(vertex)){
            Vertex sourceVertex = inEdge.source;
            Vertex targetVertex = inEdge.target;
            std::cout << "From vertex " << sourceVertex.id
                      << " over edge " << inEdge.id
                      << " to vertex " << targetVertex.id << std::endl;
        }
    }

Point to Point Communication Operations

  • send/asyncSend: Send data synchronously or asynchronously. The asynchronous version of send provides an event, which can be waited on or tested for completion.
    typedef Cage::Edge Edge;
    typedef Cage::Event Event;
    // Events that will occur
    std::vector<Event> events;
    // Some data that should be sent
    std::vector<int> data(100,1);
    for(Vertex vertex : cage.hostedVertices){
        for(Edge outEdge : cage.getOutEdges(vertex)){
            // Synchronous send
            cage.send(outEdge, data);
            // Asynchronous send
            cage.send(outEdge, data, events);
            // Wait for the event and test its state
            events.back().wait();
            bool eventState = events.back().ready();
        }
    }
  • recv/asyncRecv: Receive data synchronously or asynchronously. The asynchronous version of recv provides an event, which can be waited on or tested for completion.
    typedef Cage::Edge Edge;
    typedef Cage::Event Event;
    // Events that will occur
    std::vector<Event> events;
    // Buffer for the data to be received
    std::vector<int> data(100,1);
    for(Vertex vertex : cage.hostedVertices){
        for(Edge inEdge : cage.getInEdges(vertex)){
            // Synchronous receive
            cage.recv(inEdge, data);
            // Asynchronous receive
            cage.recv(inEdge, data, events);
            // Wait for the event and test its state
            events.back().wait();
            bool eventState = events.back().ready();
        }
    }
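
As described under Graph Operations, a typical communication step iterates over the hosted vertices, sends on all outgoing edges, and receives on all incoming edges. The following sketch combines the operations above into such a step; the payload of 100 integers per edge and the choice of asynchronous sends with synchronous receives are assumptions for illustration.

    typedef Cage::Vertex Vertex;
    typedef Cage::Edge   Edge;
    typedef Cage::Event  Event;

    std::vector<Event> events;
    std::vector<int> sendData(100, 1);
    std::vector<int> recvData(100, 0);

    for(Vertex vertex : cage.hostedVertices){
        // Send to all adjacent vertices connected by an outgoing edge (asynchronously)
        for(Edge outEdge : cage.getOutEdges(vertex)){
            cage.send(outEdge, sendData, events);
        }
        // Receive from all adjacent vertices connected by an incoming edge (synchronously)
        for(Edge inEdge : cage.getInEdges(vertex)){
            cage.recv(inEdge, recvData);
        }
    }

    // Wait until all asynchronous sends have completed
    for(Event &event : events){
        event.wait();
    }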

Collective Communication Operations

  • spread: Spread data to all adjacent vertices of a vertex that are connected by an outgoing edge (a combined sketch of spread and collect follows this list).
    typedef Cage::Edge Edge;
    typedef Cage::Event Event;
    // Events that will occur
    std::vector<Event> events;
    // Some data that should be sent
    std::vector<int> data(100,1);
    for(Vertex vertex : cage.hostedVertices){
        cage.spread(vertex, data, events);
    }
  • collect: Collect data from all adjacent vertices of a vertex that are connected by an incoming edge.
    typedef Cage::Edge Edge;
    typedef Cage::Event Event;
    // Events that will occur
    std::vector<Event> events;
    for(Vertex vertex : cage.hostedVertices){
        // Some data that should be collected
        std::vector<int> data(vertex.nInEdges * 100, 1);
        cage.collect(vertex, data);
    }
  • reduce: Reduce a vector of data with a binary operator; the result is received by a root vertex.
    Vertex rootVertex = cage.getVertex(0);
    std::vector<int> send(100);
    std::vector<int> recv(100);
    // Each vertex needs to reduce its data; the root receives the reduction.
    for(Vertex vertex : cage.hostedVertices){
        cage.reduce(rootVertex, vertex, std::plus<int>(), send, recv);
    }
  • allReduce: Reduce a vector of data; the result is received by every vertex.
    std::vector<int> send(100);
    std::vector<int> recv(100);
    // Each vertex needs to reduce its data; every vertex receives the reduction.
    for(Vertex vertex : cage.hostedVertices){
        cage.allReduce(rootVertex, vertex, std::plus<int>(), send, recv);
    }
  • gather: The root vertex collects data from every vertex.
    Vertex rootVertex = cage.getVertex(0);
    std::vector<int> send(10);
    std::vector<int> recv(10 * cage.getVertices().size());
    // Each vertex needs to send its data; the root receives it.
    for(Vertex vertex : cage.hostedVertices){
        cage.gather(rootVertex, vertex, send, recv);
    }
  • allGather: Each vertex sends its data and every vertex receives the gathered data.
    std::vector<int> send(10);
    std::vector<int> recv(10 * cage.getVertices().size());
    // Each vertex needs to send its data; all vertices receive.
    for(Vertex vertex : cage.hostedVertices){
        cage.allGather(vertex, send, recv);
    }
  • synchronize: Synchronize all peers (including their hosted vertices).
    cage.synchronize();
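
As an illustration of how the collective operations above can be combined, the following sketch lets every hosted vertex spread its data to its neighbors and afterwards collect the data of its neighbors; the payload of 100 integers per edge is an assumption, and the final synchronize separates this exchange step from the next one.

    typedef Cage::Vertex Vertex;
    typedef Cage::Event  Event;

    std::vector<Event> events;
    std::vector<int> ownData(100, 1);

    // Send own data along all outgoing edges
    for(Vertex vertex : cage.hostedVertices){
        cage.spread(vertex, ownData, events);
    }

    // Gather the data of all neighbors connected by incoming edges
    for(Vertex vertex : cage.hostedVertices){
        std::vector<int> neighborData(vertex.nInEdges * 100, 0);
        cage.collect(vertex, neighborData);
    }

    // Wait for the spread events and synchronize all peers before the next step
    for(Event &event : events){
        event.wait();
    }
    cage.synchronize();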

Further Links