
Commit

fix update mkdocs
Signed-off-by: Jan Jansen <[email protected]>
(cherry picked from commit 212590b)
(cherry picked from commit 93394de)

# Conflicts:
#	docs/advanced-topics/hadoop.md
#	docs/changelog.md
#	docs/interactions/connecting/java.md
#	docs/storage-backend/scylladb.md
#	mkdocs.yml
#	requirements.txt
farodin91 committed Nov 16, 2023
1 parent 3a08928 commit a73a843
Showing 10 changed files with 515 additions and 424 deletions.
2 changes: 1 addition & 1 deletion docs.Dockerfile
@@ -13,7 +13,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.

FROM python:3.8
FROM python:3.12

ENV PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/root/.local/bin

20 changes: 14 additions & 6 deletions docs/advanced-topics/commit-releases.md
@@ -19,17 +19,21 @@ under the same group id `org.janusgraph`, but with different version formats.
Official JanusGraph releases use versions of the form `MAJOR.MINOR.PATCH`.
Example dependencies:

```xml tab='Maven'
/// tab | Maven
```xml
<dependency>
<groupId>org.janusgraph</groupId>
<artifactId>janusgraph-core</artifactId>
<version>0.6.2</version>
<version>1.0.0</version>
</dependency>
```
///

```groovy tab='Gradle'
compile "org.janusgraph:janusgraph-core:0.6.2"
/// tab | Gradle
```groovy
compile "org.janusgraph:janusgraph-core:1.0.0"
```
///

Versions of commit releases have the following format: `FOLLOWING_VERSION-DATE-TIME.COMMIT`.

@@ -41,17 +45,21 @@ It has the `MAJOR.MINOR.PATCH` format.

Example dependencies:

```xml tab='Maven'
/// tab | Maven
```xml
<dependency>
<groupId>org.janusgraph</groupId>
<artifactId>janusgraph-core</artifactId>
<version>0.6.3-20230104-164606.a49366e</version>
</dependency>
```
///

```groovy tab='Gradle'
/// tab | Gradle
```groovy
compile "org.janusgraph:janusgraph-core:0.6.3-20230104-164606.a49366e"
```
///
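
Note that the `compile` configuration shown above was removed in Gradle 7; on current Gradle versions the equivalent declaration uses `implementation`. A minimal sketch of the same commit-release dependency for modern Gradle:

```groovy
dependencies {
    // Gradle 7+ replacement for the legacy `compile` configuration
    implementation "org.janusgraph:janusgraph-core:0.6.3-20230104-164606.a49366e"
}
```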

## JanusGraph distribution builds

250 changes: 135 additions & 115 deletions docs/advanced-topics/hadoop.md
@@ -34,7 +34,7 @@ computations of various OLAP queries may be persisted on the Hadoop file
system.

For configuring a single-node Hadoop cluster, please refer to the official
[Apache Hadoop Docs](https://hadoop.apache.org/docs/r{{hadoop2_version }}/hadoop-project-dist/hadoop-common/SingleCluster.html)
[Apache Hadoop Docs](https://hadoop.apache.org/docs/r{{ hadoop2_version }}/hadoop-project-dist/hadoop-common/SingleCluster.html)

Once you have a Hadoop cluster up and running, you will need to specify
the Hadoop configuration files in the `CLASSPATH`. The below document
@@ -83,70 +83,80 @@ JanusGraph directly supports the following graphReader classes:
The following `.properties` files configure a JanusGraph instance as a
HadoopGraph for running OLAP queries (an illustrative sketch of their typical contents follows the tabs below).

```properties tab='read-cql.properties'
{!../janusgraph-dist/src/assembly/static/conf/hadoop-graph/read-cql.properties!}
/// tab | read-cql.properties
```properties
{%
include "../../janusgraph-dist/src/assembly/static/conf/hadoop-graph/read-cql.properties"
%}
```
///

```properties tab='read-hbase.properties'
{!../janusgraph-dist/src/assembly/static/conf/hadoop-graph/read-hbase.properties!}
/// tab | read-hbase.properties
```properties
{%
include "../../janusgraph-dist/src/assembly/static/conf/hadoop-graph/read-hbase.properties"
%}
```
///
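
Both files ship with the JanusGraph distribution under `conf/hadoop-graph/`, and those copies are authoritative. As an illustrative sketch only (the hostname and keyspace below are assumptions), a CQL read configuration typically combines TinkerPop's HadoopGraph settings, JanusGraph's `janusgraphmr.ioformat.conf.*` storage options, and a local Spark master:

```properties
# Read the graph from CQL as a HadoopGraph, writing Gryo output
gremlin.graph=org.apache.tinkerpop.gremlin.hadoop.structure.HadoopGraph
gremlin.hadoop.graphReader=org.janusgraph.hadoop.formats.cql.CqlInputFormat
gremlin.hadoop.graphWriter=org.apache.tinkerpop.gremlin.hadoop.structure.io.gryo.GryoOutputFormat
gremlin.hadoop.jarsInDistributedCache=true
gremlin.hadoop.inputLocation=none
gremlin.hadoop.outputLocation=output

# Storage backend to read from (assumed: local Cassandra, default keyspace)
janusgraphmr.ioformat.conf.storage.backend=cql
janusgraphmr.ioformat.conf.storage.hostname=127.0.0.1
janusgraphmr.ioformat.conf.storage.cql.keyspace=janusgraph

# Run the OLAP job on local Spark with four worker threads
spark.master=local[4]
spark.serializer=org.apache.spark.serializer.KryoSerializer
```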

First create a properties file with the above configuration, then load it
in the Gremlin Console to run OLAP queries as follows:

=== "read-cql.properties"
```bash
bin/gremlin.sh

\,,,/
(o o)
-----oOOo-(3)-oOOo-----
plugin activated: janusgraph.imports
gremlin> :plugin use tinkerpop.hadoop
==>tinkerpop.hadoop activated
gremlin> :plugin use tinkerpop.spark
==>tinkerpop.spark activated
gremlin> // 1. Open the graph for OLAP processing, reading in from Cassandra 3
gremlin> graph = GraphFactory.open('conf/hadoop-graph/read-cql.properties')
==>hadoopgraph[cqlinputformat->gryooutputformat]
gremlin> // 2. Configure the traversal to run with Spark
gremlin> g = graph.traversal().withComputer(SparkGraphComputer)
==>graphtraversalsource[hadoopgraph[cqlinputformat->gryooutputformat], sparkgraphcomputer]
gremlin> // 3. Run some OLAP traversals
gremlin> g.V().count()
......
==>808
gremlin> g.E().count()
......
==>8046
```

=== "read-hbase.properties"
```bash
bin/gremlin.sh

\,,,/
(o o)
-----oOOo-(3)-oOOo-----
plugin activated: janusgraph.imports
gremlin> :plugin use tinkerpop.hadoop
==>tinkerpop.hadoop activated
gremlin> :plugin use tinkerpop.spark
==>tinkerpop.spark activated
gremlin> // 1. Open the graph for OLAP processing, reading in from HBase
gremlin> graph = GraphFactory.open('conf/hadoop-graph/read-hbase.properties')
==>hadoopgraph[hbaseinputformat->gryooutputformat]
gremlin> // 2. Configure the traversal to run with Spark
gremlin> g = graph.traversal().withComputer(SparkGraphComputer)
==>graphtraversalsource[hadoopgraph[hbaseinputformat->gryooutputformat], sparkgraphcomputer]
gremlin> // 3. Run some OLAP traversals
gremlin> g.V().count()
......
==>808
gremlin> g.E().count()
......
==>8046
```
/// tab | read-cql.properties
```bash
bin/gremlin.sh

\,,,/
(o o)
-----oOOo-(3)-oOOo-----
plugin activated: janusgraph.imports
gremlin> :plugin use tinkerpop.hadoop
==>tinkerpop.hadoop activated
gremlin> :plugin use tinkerpop.spark
==>tinkerpop.spark activated
gremlin> // 1. Open the graph for OLAP processing, reading in from Cassandra 3
gremlin> graph = GraphFactory.open('conf/hadoop-graph/read-cql.properties')
==>hadoopgraph[cqlinputformat->gryooutputformat]
gremlin> // 2. Configure the traversal to run with Spark
gremlin> g = graph.traversal().withComputer(SparkGraphComputer)
==>graphtraversalsource[hadoopgraph[cqlinputformat->gryooutputformat], sparkgraphcomputer]
gremlin> // 3. Run some OLAP traversals
gremlin> g.V().count()
......
==>808
gremlin> g.E().count()
......
==>8046
```
///
/// tab | read-hbase.properties
```bash
bin/gremlin.sh

\,,,/
(o o)
-----oOOo-(3)-oOOo-----
plugin activated: janusgraph.imports
gremlin> :plugin use tinkerpop.hadoop
==>tinkerpop.hadoop activated
gremlin> :plugin use tinkerpop.spark
==>tinkerpop.spark activated
gremlin> // 1. Open the graph for OLAP processing, reading in from HBase
gremlin> graph = GraphFactory.open('conf/hadoop-graph/read-hbase.properties')
==>hadoopgraph[hbaseinputformat->gryooutputformat]
gremlin> // 2. Configure the traversal to run with Spark
gremlin> g = graph.traversal().withComputer(SparkGraphComputer)
==>graphtraversalsource[hadoopgraph[hbaseinputformat->gryooutputformat], sparkgraphcomputer]
gremlin> // 3. Run some OLAP traversals
gremlin> g.V().count()
......
==>808
gremlin> g.E().count()
......
==>8046
```
///
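
Once `g` is bound to `SparkGraphComputer`, any read-only traversal runs as a Spark job, not just `count()`. A couple of illustrative follow-up queries (label and property names here are assumptions about the loaded data):

```groovy
// Distribution of vertices by label, computed across the whole graph
g.V().groupCount().by(label)

// Narrow the OLAP scan to one label before aggregating
g.V().hasLabel('person').values('name').count()
```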
### OLAP Traversals with Spark Standalone Cluster
@@ -169,69 +179,79 @@ standalone cluster with only minor changes:
The final properties file used for OLAP traversal is as follows:
```properties tab='read-cql-standalone-cluster.properties'
{!../janusgraph-dist/src/assembly/static/conf/hadoop-graph/read-cql-standalone-cluster.properties!}
/// tab | read-cql-standalone-cluster.properties
```properties
{%
include "../../janusgraph-dist/src/assembly/static/conf/hadoop-graph/read-cql-standalone-cluster.properties"
%}
```
///
```properties tab='read-hbase-standalone-cluster.properties'
{!../janusgraph-dist/src/assembly/static/conf/hadoop-graph/read-hbase-standalone-cluster.properties!}
/// tab | read-hbase-standalone-cluster.properties
```properties
{%
include "../../janusgraph-dist/src/assembly/static/conf/hadoop-graph/read-hbase-standalone-cluster.properties"
%}
```
///
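Relative to the local configurations shown earlier, the standalone-cluster variants chiefly repoint `spark.master` from a local thread pool to the cluster's master URL and make the JanusGraph classes available to the executors. A sketch of the relevant delta (the master URL and library path are assumptions for your environment):

```properties
# Submit the OLAP job to a Spark standalone master instead of local[*]
spark.master=spark://127.0.0.1:7077

# Executors must be able to load the JanusGraph jars (path is an assumption)
spark.executor.extraClassPath=/opt/janusgraph/lib/*
```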
Then use the properties file as follows from the Gremlin Console:
=== "read-cql-standalone-cluster.properties"
```bash
bin/gremlin.sh

\,,,/
(o o)
-----oOOo-(3)-oOOo-----
plugin activated: janusgraph.imports
gremlin> :plugin use tinkerpop.hadoop
==>tinkerpop.hadoop activated
gremlin> :plugin use tinkerpop.spark
==>tinkerpop.spark activated
gremlin> // 1. Open the graph for OLAP processing, reading in from Cassandra 3
gremlin> graph = GraphFactory.open('conf/hadoop-graph/read-cql-standalone-cluster.properties')
==>hadoopgraph[cqlinputformat->gryooutputformat]
gremlin> // 2. Configure the traversal to run with Spark
gremlin> g = graph.traversal().withComputer(SparkGraphComputer)
==>graphtraversalsource[hadoopgraph[cqlinputformat->gryooutputformat], sparkgraphcomputer]
gremlin> // 3. Run some OLAP traversals
gremlin> g.V().count()
......
==>808
gremlin> g.E().count()
......
==>8046
```

=== "read-hbase-standalone-cluster.properties"
```bash
bin/gremlin.sh

\,,,/
(o o)
-----oOOo-(3)-oOOo-----
plugin activated: janusgraph.imports
gremlin> :plugin use tinkerpop.hadoop
==>tinkerpop.hadoop activated
gremlin> :plugin use tinkerpop.spark
==>tinkerpop.spark activated
gremlin> // 1. Open the graph for OLAP processing, reading in from HBase
gremlin> graph = GraphFactory.open('conf/hadoop-graph/read-hbase-standalone-cluster.properties')
==>hadoopgraph[hbaseinputformat->gryooutputformat]
gremlin> // 2. Configure the traversal to run with Spark
gremlin> g = graph.traversal().withComputer(SparkGraphComputer)
==>graphtraversalsource[hadoopgraph[hbaseinputformat->gryooutputformat], sparkgraphcomputer]
gremlin> // 3. Run some OLAP traversals
gremlin> g.V().count()
......
==>808
gremlin> g.E().count()
......
==>8046
```
/// tab | read-cql-standalone-cluster.properties
```bash
bin/gremlin.sh

\,,,/
(o o)
-----oOOo-(3)-oOOo-----
plugin activated: janusgraph.imports
gremlin> :plugin use tinkerpop.hadoop
==>tinkerpop.hadoop activated
gremlin> :plugin use tinkerpop.spark
==>tinkerpop.spark activated
gremlin> // 1. Open the graph for OLAP processing, reading in from Cassandra 3
gremlin> graph = GraphFactory.open('conf/hadoop-graph/read-cql-standalone-cluster.properties')
==>hadoopgraph[cqlinputformat->gryooutputformat]
gremlin> // 2. Configure the traversal to run with Spark
gremlin> g = graph.traversal().withComputer(SparkGraphComputer)
==>graphtraversalsource[hadoopgraph[cqlinputformat->gryooutputformat], sparkgraphcomputer]
gremlin> // 3. Run some OLAP traversals
gremlin> g.V().count()
......
==>808
gremlin> g.E().count()
......
==>8046
```
///
/// tab | read-hbase-standalone-cluster.properties
```bash
bin/gremlin.sh

\,,,/
(o o)
-----oOOo-(3)-oOOo-----
plugin activated: janusgraph.imports
gremlin> :plugin use tinkerpop.hadoop
==>tinkerpop.hadoop activated
gremlin> :plugin use tinkerpop.spark
==>tinkerpop.spark activated
gremlin> // 1. Open the graph for OLAP processing, reading in from HBase
gremlin> graph = GraphFactory.open('conf/hadoop-graph/read-hbase-standalone-cluster.properties')
==>hadoopgraph[hbaseinputformat->gryooutputformat]
gremlin> // 2. Configure the traversal to run with Spark
gremlin> g = graph.traversal().withComputer(SparkGraphComputer)
==>graphtraversalsource[hadoopgraph[hbaseinputformat->gryooutputformat], sparkgraphcomputer]
gremlin> // 3. Run some OLAP traversals
gremlin> g.V().count()
......
==>808
gremlin> g.E().count()
......
==>8046
```
///
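
As noted at the top of this page, the Hadoop configuration files must be on the `CLASSPATH` when the Gremlin Console starts. A minimal sketch, assuming Hadoop is installed under `/usr/local/hadoop`:

```bash
# Expose Hadoop's client configuration to the Gremlin Console (path is an assumption)
export HADOOP_CONF_DIR=/usr/local/hadoop/etc/hadoop
export CLASSPATH="$HADOOP_CONF_DIR:$CLASSPATH"
bin/gremlin.sh
```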
## Other Vertex Programs