A SPRING AMR service and client#
A client and server that generate AMR graphs from natural language sentences. This repository provides a Docker image that compiles the AMR SPRING parser using the authors' original settings.
The Docker image was created because this parser has a very particular set of Python dependencies that do not compile on some platforms. Specifically, tokenizers==0.7.0 fails to compile from source on a pip install because of a Rust compiler version misconfiguration.
The Docker image provides a very simple service written in [Flask] that uses the SPRING model for inferencing and returns the parsed AMRs.
Features:
- Parse natural language sentences (batched) into AMRs.
- Results are cached in memory or in an SQLite database.
- Both a command line and a Pythonic object-oriented client API.
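The SQLite result caching listed above can be illustrated with a minimal sketch. The table name, schema, and function names below are hypothetical and are not the library's actual implementation; they only show the pattern of keying cached graphs by sentence text:

```python
import sqlite3


def make_cache(path: str = ':memory:') -> sqlite3.Connection:
    """Create (or open) a tiny sentence-to-graph cache database."""
    conn = sqlite3.connect(path)
    conn.execute(
        'create table if not exists amr (sent text primary key, graph text)')
    return conn


def cached_parse(conn: sqlite3.Connection, sent: str, parse_fn) -> str:
    """Return the cached AMR graph for ``sent``, parsing only on a miss."""
    row = conn.execute(
        'select graph from amr where sent = ?', (sent,)).fetchone()
    if row is not None:
        # cache hit: skip the (expensive) model inference
        return row[0]
    graph = parse_fn(sent)
    conn.execute('insert into amr values (?, ?)', (sent, graph))
    conn.commit()
    return graph
```

Passing `:memory:` gives the in-memory variant; a file path makes the cache persistent across runs.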
Installing#
First install the client:
pip3 install zensols.amrspring
Server#
There is a script to build a local server, but there is also a Docker image.
To build a local server:
Clone this repo:
git clone https://github.com/plandes/amrspring
Working directory:
cd amrspring
Build out the server:
src/bin/build-server.sh <python installation directory>
Start it:
( cd server ; ./serverctl start )
Test it:
( cd server ; ./serverctl test-server )
Stop it:
( cd server ; ./serverctl stop )
Docker#
To build the Docker image:
Download the model(s) from the AMR SPRING parser repository.
Build the image:
cd docker ; make build
Check for errors.
Start the image:
make up
Test using a method from usage.
Of course, the server code can be run without Docker by cloning the AMR SPRING parser repository and adding the server code. See the Dockerfile for more information on how to do that.
Usage#
The package can be used from the command line or directly via a Python API.
You can use a combination of UNIX tools to POST directly to it:
wget -q -O - --post-data='{"sents": ["Obama was the 44th president."]}' \
--header='Content-Type:application/json' \
'http://localhost:8080/parse' | jq -r '.amrs."0"."graph"'
# ::snt Obama was the 44th president.
(z0 / person
:ord (z1 / ordinal-entity
:value 44)
:ARG0-of (z2 / have-org-role-91
:ARG2 (z3 / president))
:domain (z4 / person
:name (z5 / name
:op1 "Obama")))
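The same request can be made from Python with only the standard library. This is a sketch assuming the /parse endpoint and JSON request/response shapes shown in the wget example; the function names are illustrative, and a server must be running on localhost:8080 for the call itself to succeed:

```python
import json
import urllib.request


def build_payload(sents: list) -> bytes:
    """Encode a batch of sentences as the JSON body the server expects."""
    return json.dumps({'sents': sents}).encode('utf-8')


def parse_sentences(sents: list, url: str = 'http://localhost:8080/parse') -> dict:
    """POST a batch of sentences and return the decoded JSON response."""
    req = urllib.request.Request(
        url, data=build_payload(sents),
        headers={'Content-Type': 'application/json'})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())


# example (requires a running server); the response indexes results by
# the sentence's position as a string key:
#   result = parse_sentences(['Obama was the 44th president.'])
#   print(result['amrs']['0']['graph'])
```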
It also offers a command line:
$ amrspring --level warn parse 'Obama was the president.'
sent: Obama was the president.
graph:
# ::snt Obama was the president.
(z0 / person
:ARG0-of (z1 / have-org-role-91
:ARG2 (z2 / president))
:domain (z3 / person
:name (z4 / name
:op1 "Obama")))
The Python API is straightforward as well:
>>> from zensols.amrspring import AmrPrediction, ApplicationFactory
>>> client = ApplicationFactory.get_client()
>>> pred = tuple(client.parse(['Obama was the president.']))[0]
2024-02-19 19:41:03,659 parsed 1 sentences in 3ms
>>> print(pred.graph)
# ::snt Obama was the president.
(z0 / person
:ARG0-of (z1 / have-org-role-91
:ARG2 (z2 / president))
:domain (z3 / person
:name (z4 / name
:op1 "Obama")))
Documentation#
See the full documentation. The API reference is also available.
Changelog#
An extensive changelog is available here.
Citation#
This package and the Docker image use the original AMR SPRING parser code base from the paper “One SPRING to Rule Them Both: Symmetric AMR Semantic Parsing and Generation without a Complex Pipeline”:
@inproceedings{bevilacquaOneSPRINGRule2021,
title = {One {{SPRING}} to {{Rule Them Both}}: {{Symmetric AMR Semantic Parsing}} and {{Generation}} without a {{Complex Pipeline}}},
shorttitle = {One {{SPRING}} to {{Rule Them Both}}},
booktitle = {Proceedings of the {{AAAI Conference}} on {{Artificial Intelligence}}},
author = {Bevilacqua, Michele and Blloshmi, Rexhina and Navigli, Roberto},
date = {2021-05-18},
volume = {35},
number = {14},
pages = {12564--12573},
location = {Virtual},
url = {https://ojs.aaai.org/index.php/AAAI/article/view/17489},
urldate = {2022-07-28}
}
License#
Copyright (c) 2024 Paul Landes