A SPRING AMR service and client¶
A client and server that generate AMR graphs from natural language sentences. This repository provides a Docker image that compiles the AMR SPRING parser using the authors' original settings.
The Docker image was created because this parser has a very particular set of Python dependencies that do not compile on some platforms. Specifically, tokenizers==0.7.0 fails to compile from source on a pip install because of a Rust compiler version misconfiguration.
The Docker image provides a simple service written in [Flask] that uses the SPRING model for inference and returns the parsed AMRs.
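For orientation only, the sketch below shows what such an endpoint could look like; it mirrors the /parse request and response shapes used in the usage section, but it is not the actual server source, and parse_sentences is a hypothetical stand-in for the model call.

# Illustrative sketch only (not the actual server source): a Flask
# endpoint compatible with the /parse request and response shapes
# shown in the usage section below.
from flask import Flask, request, jsonify

app = Flask(__name__)

def parse_sentences(sents):
    # hypothetical stand-in for SPRING model inference; the real service
    # returns one Penman graph string per input sentence
    return ['# ::snt %s\n(z0 / thing)' % s for s in sents]

@app.route('/parse', methods=['POST'])
def parse():
    # clients POST JSON of the form: {"sents": ["<sentence>", ...]}
    sents = request.get_json()['sents']
    graphs = parse_sentences(sents)
    # each graph is keyed by its index in the request
    return jsonify({'amrs': {str(i): {'graph': g}
                             for i, g in enumerate(graphs)}})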
Features:
Parse natural language sentences (batched) into AMRs.
Results cached in memory or an SQLite database.
Both a command line and a Pythonic, object-oriented client API.
Installing¶
First install the client:
pip3 install zensols.amrspring
You can run the server locally, but it is far easier and faster to pull and use the docker image (see the docker image section). In the unlikely case you want to build the docker image yourself, see the docker build instructions.
Docker Image¶
There is a docker image with all the dependencies needed to run the parser, except for the public BART checkpoint (downloaded automatically) and the SPRING model (which you supply).
To start the server from a Docker container:
Clone this repo:
git clone https://github.com/plandes/amrspring
Set the working directory:
cd amrspring
Download the model(s) from the AMR SPRING parser repository.
Pull the image:
docker pull plandes/springserv
Change the working directory to the docker source tree:
cd docker
Add the model:
mkdir models && mv your-spring-model.pt models/model.pt
Start the container using docker compose:
docker-compose up -d
Note: The server takes a long time to come up the first time because it will download the BART base model into the models directory. See the usage section on how to test the server.
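To watch the checkpoint download and confirm the service has come up, you can follow the container logs with a standard compose command before running the test request from the usage section:

docker-compose logs -f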
Server¶
To build a local server:
Clone this repo:
git clone https://github.com/plandes/amrspring
Set the working directory:
cd amrspring
Build out the server:
src/bin/build-server.sh <python installation directory>
Start it:
( cd server ; ./serverctl start )
Test it:
( cd server ; ./serverctl test-server )
Stop it:
( cd server ; ./serverctl stop )
Usage¶
The package can be used from the command line or directly via a Python API.
You can use a combination of UNIX tools to POST a request directly to it:
wget -q -O - --post-data='{"sents": ["Obama was the 44th president."]}' \
--header='Content-Type:application/json' \
'http://localhost:8080/parse' | jq -r '.amrs."0"."graph"'
The output should be:
# ::snt Obama was the 44th president.
(z0 / person
:ord (z1 / ordinal-entity
:value 44)
:ARG0-of (z2 / have-org-role-91
:ARG2 (z3 / president))
:domain (z4 / person
:name (z5 / name
:op1 "Obama")))
It also offers a command line:
$ amrspring --level warn parse 'Obama was the president.'
sent: Obama was the president.
graph:
# ::snt Obama was the president.
(z0 / person
:ARG0-of (z1 / have-org-role-91
:ARG2 (z2 / president))
:domain (z3 / person
:name (z4 / name
:op1 "Obama")))
The Python API is straightforward as well:
>>> from zensols.amrspring import AmrPrediction, ApplicationFactory
>>> client = ApplicationFactory.get_client()
>>> pred = tuple(client.parse(['Obama was the president.']))[0]
2024-02-19 19:41:03,659 parsed 1 sentences in 3ms
>>> print(pred.graph)
# ::snt Obama was the president.
(z0 / person
:ARG0-of (z1 / have-org-role-91
:ARG2 (z2 / president))
:domain (z3 / person
:name (z4 / name
:op1 "Obama")))
Documentation¶
See the full documentation. The API reference is also available.
Changelog¶
An extensive changelog is available here.
Citation¶
This package and the docker image use the original AMR SPRING parser code base from the paper “One SPRING to Rule Them Both: Symmetric AMR Semantic Parsing and Generation without a Complex Pipeline”:
@inproceedings{bevilacquaOneSPRINGRule2021,
title = {One {{SPRING}} to {{Rule Them Both}}: {{Symmetric AMR Semantic Parsing}} and {{Generation}} without a {{Complex Pipeline}}},
shorttitle = {One {{SPRING}} to {{Rule Them Both}}},
booktitle = {Proceedings of the {{AAAI Conference}} on {{Artificial Intelligence}}},
author = {Bevilacqua, Michele and Blloshmi, Rexhina and Navigli, Roberto},
date = {2021-05-18},
volume = {35},
number = {14},
pages = {12564--12573},
location = {Virtual},
url = {https://ojs.aaai.org/index.php/AAAI/article/view/17489},
urldate = {2022-07-28}
}
License¶
Copyright (c) 2024 - 2025 Paul Landes