Resolve codespell pre-commit errors #56

Merged: 2 commits, Feb 20, 2024
1 change: 1 addition & 0 deletions .pre-commit-config.yaml
@@ -29,6 +29,7 @@ repos:
- id: codespell
name: codespell
description: Checks for common misspellings in text files
additional_dependencies: [tomli]
- repo: https://github.com/astral-sh/ruff-pre-commit
rev: v0.2.1
hooks:
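A note on the `tomli` addition above: codespell reads its settings from the `[tool.codespell]` table in `pyproject.toml`, and on Python versions before 3.11 (which lack the stdlib `tomllib`) it needs the `tomli` package to parse that file, so the hook's isolated environment must install it via `additional_dependencies`. A minimal sketch of the parsing involved (the TOML fragment is illustrative, not this repo's full file):

```python
try:
    import tomllib  # stdlib TOML parser, Python >= 3.11
except ModuleNotFoundError:
    import tomli as tomllib  # backport that codespell relies on

# Illustrative pyproject.toml fragment.
doc = tomllib.loads('[tool.codespell]\nignore-words-list = "gost"\n')
assert doc["tool"]["codespell"]["ignore-words-list"] == "gost"
```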
11 changes: 6 additions & 5 deletions README.md
@@ -6,7 +6,8 @@

### From PyPI

**In development**
#### *pypi support in development*

<!-- ```
conda create --name test python=3.8
conda activate test
@@ -16,7 +17,7 @@ pip install GOSTnets

### From `conda-forge`

**In development**
#### *conda support in development*

### From Source

@@ -116,13 +117,13 @@ Plenty of examples and tutorials using Jupyter Notebooks live inside of the Impl

in the docs dir, run:

```
```shell
sphinx-apidoc -f -o source/ ../GOSTnets
```

in the docs dir, run:

```
```shell
make html
```

@@ -137,7 +138,7 @@ gn.edge_gdf_from_graph?

returns:

```
```python
Signature: gn.edge_gdf_from_graph(G, crs={'init': 'epsg:4326'}, attr_list=None, geometry_tag='geometry', xCol='x', yCol='y')
#### Function for generating a GeoDataFrame from a networkx Graph object ###
REQUIRED: a graph object G
2 changes: 2 additions & 0 deletions Tutorials/README.md
@@ -1,5 +1,7 @@
# Tutorial

These notebooks are designed to introduce the methodology of GOSTNets to novice python users; in order to properly implement these, users should have a basic knowledge of

- Python
- Jupyter notebooks
- Anaconda (for setup)
4 changes: 2 additions & 2 deletions Tutorials/Step 3 - Using your Graph.ipynb
@@ -452,7 +452,7 @@
}
],
"source": [
"# the crs of churchs\n",
"# the crs of churches\n",
"churches.crs"
]
},
@@ -900,7 +900,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Now, We can calcuate the OD matrix using the GOSTNets calculate_OD function. Bear in mind it takes list objects as inputs:"
"Now, We can calculate the OD matrix using the GOSTNets calculate_OD function. Bear in mind it takes list objects as inputs:"
]
},
{
4 changes: 2 additions & 2 deletions pyproject.toml
@@ -35,7 +35,7 @@ dependencies = [
]

[project.optional-dependencies]
dev = ["pre-commit", "pytest", "pytest-cov", "GOStnets[docs,osm,opt]"]
dev = ["pre-commit", "pytest", "pytest-cov", "tomli", "GOStnets[docs,osm,opt]"]
docs = ["docutils==0.17.1", "jupyter-book>=1,<2"]
osm = ["gdal", "geopy", "boltons"]
opt = ["pulp"]
@@ -51,7 +51,7 @@ write_to = "src/GOSTnets/_version.py"
[tool.codespell]
skip = 'docs/_build,docs/references.bib,__pycache__,*.png,*.gz,*.whl'
ignore-regex = '^\s*"image\/png":\s.*'
ignore-words-list = "gost,"
ignore-words-list = "gost,nd,ans,chck,lenth"

[tool.ruff.pydocstyle]
convention = "numpy"
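The expanded `ignore-words-list` above tells codespell to skip tokens that look like typos but are intentional in this codebase (`nd`, `ans`, `chck`, `lenth` appear in identifiers and notebook text). A rough pure-Python sketch of the behavior, with a hypothetical `typo_map` standing in for codespell's much larger dictionary:

```python
# Hypothetical mini-dictionary for illustration only.
typo_map = {"calcuate": "calculate", "chck": "check", "lenth": "length"}
ignore_words = {"gost", "nd", "ans", "chck", "lenth"}

def suggest(token):
    """Return a suggested fix, or None if the token is clean or ignored."""
    t = token.lower()
    if t in ignore_words:
        return None  # whitelisted: never flagged, even if in the dictionary
    return typo_map.get(t)

assert suggest("chck") is None            # ignored despite matching a typo
assert suggest("calcuate") == "calculate"
```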
4 changes: 2 additions & 2 deletions src/GOSTnets/calculate_od_raw.py
@@ -22,8 +22,8 @@ def calculateOD_gdf(
G (networkx graph): describes the road network. Often extracted using OSMNX
origins (geopandas dataframe): source locations for calculating access
destinations (geopandas dataframe): destination locations for calculating access
calculate_snap (boolean, optioinal): variable to add snapping distance to travel time, default is false
wgs84 (CRS dictionary, optional): CRS fo road network to which the GDFs are projected
calculate_snap (boolean, optional): variable to add snapping distance to travel time, default is false
wgs84 (CRS dictionary, optional): CRS of road network to which the GDFs are projected
Returns:
numpy array: 2d OD matrix with columns as index of origins and rows as index of destinations
"""
16 changes: 8 additions & 8 deletions src/GOSTnets/core.py
@@ -142,7 +142,7 @@ def edges_and_nodes_gdf_to_graph(
:param discard_node_col:
default is empty, all columns in the nodes_df will be copied to the nodes in the graph. If a list is filled, all the columns specified will be dropped.
:checks:
if True, will perfrom a validation checks and return the nodes_df with a 'node_in_edge_df' column
if True, will perform a validation checks and return the nodes_df with a 'node_in_edge_df' column
:add_missing_reflected_edges:
if contains a tag, then the oneway column is used to see whether reverse edges need to be added. This is much faster than using the add_missing_reflected_edges after a graph is already created.
:oneway_tag:
@@ -693,7 +693,7 @@ def generate_isochrones(G, origins, thresh, weight=None, stacking=False):
Function for generating isochrones from one or more graph nodes. Ensure any GeoDataFrames / graphs are in the same projection before using function, or pass a crs

:param G: a graph containing one or more nodes
:param orgins: a list of node IDs that the isochrones are to be generated from
:param origins: a list of node IDs that the isochrones are to be generated from
:param thresh: The time threshold for the calculation of the isochrone
:param weight: Name of edge weighting for calculating 'distances'. For isochrones, should be time expressed in seconds. Defaults to time expressed in seconds.
:param stacking: If True, returns number of origins that can be reached from that node. If false, max = 1
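For readers unfamiliar with `generate_isochrones`, the underlying idea is a threshold-limited shortest-path search: every node whose travel time from the origin is within `thresh` belongs to the isochrone. A toy sketch on a plain adjacency dict (not the networkx graphs GOSTnets actually uses), with edge weights as travel time in seconds:

```python
import heapq

def reachable_within(graph, origin, thresh):
    """Nodes reachable from origin with total edge weight <= thresh."""
    dist = {origin: 0}
    heap = [(0, origin)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue  # stale heap entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd <= thresh and nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return set(dist)

g = {"a": [("b", 30), ("c", 90)], "b": [("c", 40)]}
assert reachable_within(g, "a", 60) == {"a", "b"}
assert reachable_within(g, "a", 70) == {"a", "b", "c"}  # a->b->c costs 70
```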
@@ -1243,7 +1243,7 @@ def assign_traffic_times(
:param verbose: Set to true to monitor progress of queries and notify if any queries failed, defaults to False
:param road_col: key for the road type in the edge data dictionary, defaults to 'infra_type'
:param id_col: key for the id in the edge data dictionary, defaults to 'id'
:returns: The original graph with two new data properties for the edges: 'mapbox_api' (a boolean set to True if the edge succesfuly received a trafic time value) and 'time_traffic' (travel time in seconds)
:returns: The original graph with two new data properties for the edges: 'mapbox_api' (a boolean set to True if the edge successfully received a traffic time value) and 'time_traffic' (travel time in seconds)
"""

import json
@@ -1264,7 +1264,7 @@ def first_val(x):
# print(edges_all[road_col][390:400])

print("print unique roads")
# may not of orginally worked because some fields can contain multiple road tags in a list. Ex. [motorway, trunk]. need to do pre-processing
# may not of originally worked because some fields can contain multiple road tags in a list. Ex. [motorway, trunk]. need to do pre-processing
print(edges_all[road_col].unique())

print("print accepted_road_types")
@@ -1512,7 +1512,7 @@ def gravity_demand(
G, origins, destinations, weight, maxtrips=100, dist_decay=1, fail_value=99999999999
):
"""
Function for generating a gravity-model based demand matrix. Note: 1 trip will always be returned between an origin and a destination, even if weighting would otherewise be 0.
Function for generating a gravity-model based demand matrix. Note: 1 trip will always be returned between an origin and a destination, even if weighting would otherwise be 0.
:param origins: a list of node IDs. Must be in G.
:param destinations: a list of node IDs Must be in G.
:param weight: the gravity weighting of the nodes in the model, e.g. population
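The gravity-model behavior the docstring describes (demand shrinks with distance, but never drops below 1 trip per origin-destination pair) can be sketched as follows. The exponential-decay form and the `maxtrips` scaling are assumptions for illustration, not GOSTnets' exact implementation:

```python
import math

def gravity_demand_sketch(weights_o, weights_d, dists, maxtrips=100, dist_decay=1.0):
    """Toy gravity demand: weight product times exp(-decay * distance),
    scaled so the strongest pair gets maxtrips, floored at 1 trip."""
    raw = [
        [wo * wd * math.exp(-dist_decay * dists[i][j])
         for j, wd in enumerate(weights_d)]
        for i, wo in enumerate(weights_o)
    ]
    top = max(max(row) for row in raw)
    return [[max(1, round(maxtrips * v / top)) for v in row] for row in raw]

demand = gravity_demand_sketch([10, 1], [5], [[0.0], [10.0]])
assert demand[0][0] == 100  # strongest pair gets maxtrips
assert demand[1][0] == 1    # distant, tiny pair still gets 1 trip
```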
@@ -3024,7 +3024,7 @@ def new_edge_generator(
nodes_to_add.append((v, node_data))
iterator += 1

# update the data dicionary for the new geometry
# update the data dictionary for the new geometry
UTM_geom = transform(project_WGS_UTM, passed_geom)
edge_data = {}
edge_data[geom_col] = passed_geom
@@ -3248,7 +3248,7 @@ def advanced_snap(
:param knn (int): k nearest neighbors to query for the nearest edge. Consider increasing this number up to 10 if the connection output is slightly unreasonable. But higher knn number will slow down the process.
:param measure_crs (int): preferred EPSG in meter units. Suggested to use the correct UTM projection.
:param factor: allows you to scale up / down unit of returned new_footway_edges if other than meters. Set to 1000 if length in km.
:return: G (graph): the original gdf with POIs and PPs appended and with connection edges appended and existing edges updated (if PPs are present)pois_meter (GeoDataFrame): gdf of the POIs along with extra columns, such as the associated nearest lines and PPs new_footway_edges (GeoDataFrame): gdf of the new footway edges that connect the POIs to the orginal graph
:return: G (graph): the original gdf with POIs and PPs appended and with connection edges appended and existing edges updated (if PPs are present)pois_meter (GeoDataFrame): gdf of the POIs along with extra columns, such as the associated nearest lines and PPs new_footway_edges (GeoDataFrame): gdf of the new footway edges that connect the POIs to the original graph
"""

import rtree
@@ -3506,7 +3506,7 @@ def update_edges(edges, new_lines, replace=True, nodes_meter=None, pois_meter=No
# do not add new edges if they are longer than the threshold or if the length equals 0, if the length equals 0 that means the poi was overlaying an edge itself, therefore no extra edge needs to be created
# unvalid_pos = np.where((new_edges['length'] > threshold) | (new_edges['length'] == 0))[0]
unvalid_new_edges = new_edges.iloc[unvalid_pos]
# print("print unvalid lines over threshold")
# print("print invalid lines over threshold")
# print(unvalid_new_edges)

print(f"node count before: {nodes_meter.count()[0]}")
6 changes: 3 additions & 3 deletions src/GOSTnets/fetch_od.py
@@ -43,7 +43,7 @@ def CreateODMatrix(
:param MB_Token: Mapbox private key if using the "MB" or "MBT" call types
"""

# Function for performing Euclidian distances.
# Function for performing Euclidean distances.
def EuclidCall(source_list, dest_list, source_points, dest_points):
distmatrix = np.zeros((len(source_points), len(dest_points)))
for s in range(0, len(source_points)):
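The `EuclidCall` helper above is truncated by the diff view; the core idea is simply a sources-by-destinations matrix of straight-line distances. A self-contained sketch using only the stdlib (list-of-lists instead of the numpy array the original fills):

```python
import math

def euclid_matrix(source_points, dest_points):
    # One row per source, one column per destination.
    return [[math.dist(s, d) for d in dest_points] for s in source_points]

dists = euclid_matrix([(0, 0), (3, 4)], [(0, 0), (3, 0)])
assert dists[0][0] == 0.0
assert dists[1][0] == 5.0  # 3-4-5 triangle
assert dists[1][1] == 4.0
```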
@@ -403,7 +403,7 @@ def ReadMe(ffpath):
*** Optional - if sources =/= destinations. Note - Unique identifier and Pop column names must remain the same ***
-W Filename of destinations csv
*** Optional - if encountering server errors / internet connectivity instability ***
-R Save - input latest save number to pick up matrix construciton process from there.
-R Save - input latest save number to pick up matrix construction process from there.
-Z Rescue number parameter - If you have already re-started the download process, denote how many times. First run = 0, restarted once = 1...
Do NOT put column names or indeed any input inside quotation marks.
The only exceptions is if the file paths have spaces in them.
@@ -426,7 +426,7 @@ def ReadMe(ffpath):
python OD.py -all -s C:/Temp/sources.csv -d C:/Temp/destinations.csv -outputMA C:/Temp/MA_Res.csv -outputOD C:/Temp/OD.csv
"""
parser = argparse.ArgumentParser(
description="Calculate Origin Detination",
description="Calculate Origin Destination",
epilog=exampleText,
formatter_class=argparse.RawDescriptionHelpFormatter,
)
2 changes: 1 addition & 1 deletion src/GOSTnets/load_osm.py
@@ -378,7 +378,7 @@ def get_all_intersections(
]
)

# Create one big list and treat all the cutted lines as unique lines
# Create one big list and treat all the cut lines as unique lines
flat_list = []
all_data = {}
