PipelineResult
- class PipelineResult(random_seed, model, training, training_loop, losses, metric_results, train_seconds, evaluate_seconds, stopper=None, metadata=<factory>, version=<factory>, git_hash=<factory>)[source]
Bases:
pykeen.utils.Result
A dataclass containing the results of running pykeen.pipeline.pipeline().
Attributes Summary
stopper – An early stopper
title – The title of the experiment.
Methods Summary
get_metric(key) – Get the given metric out of the metric result object.
plot(**kwargs) – Plot all plots.
plot_early_stopping(**kwargs) – Plot the evaluations during early stopping.
plot_er(**kwargs) – Plot the reduced entities and relation vectors in 2D.
plot_losses(**kwargs) – Plot the losses per epoch.
save_model(path) – Save the trained model to the given path using torch.save().
save_to_directory(directory, *[, ...]) – Save all artifacts in the given directory.
save_to_ftp(directory, ftp) – Save all artifacts to the given directory in the FTP server.
save_to_s3(directory, bucket[, s3]) – Save all artifacts to the given directory in an S3 bucket.
Attributes Documentation
- stopper: Optional[pykeen.stoppers.stopper.Stopper] = None
An early stopper
Methods Documentation
- plot(**kwargs)[source]
Plot all plots.
- Parameters
kwargs – The keyword arguments passed to pykeen.pipeline.plot_utils.plot()
- Returns
The axis
- plot_early_stopping(**kwargs)[source]
Plot the evaluations during early stopping.
- Parameters
kwargs – The keyword arguments passed to
pykeen.pipeline.plot_utils.plot_early_stopping()
- Returns
The axis
- plot_er(**kwargs)[source]
Plot the reduced entities and relation vectors in 2D.
- Parameters
kwargs – The keyword arguments passed to
pykeen.pipeline.plot_utils.plot_er()
- Returns
The axis
Warning
Plotting relations and entities on the same plot is only meaningful for translational distance models like TransE.
- plot_losses(**kwargs)[source]
Plot the losses per epoch.
- Parameters
kwargs – The keyword arguments passed to pykeen.pipeline.plot_utils.plot_losses()
- Returns
The axis
- save_model(path)[source]
Save the trained model to the given path using torch.save().
- Parameters
path (Union[str, Path]) – The path to which the model is saved. Should have an extension appropriate for a pickle, like *.pkl or *.pickle.
The model contains within it the triples factory that was used for training.
- Return type
None
- save_to_directory(directory, *, save_metadata=True, save_replicates=True, **_kwargs)[source]
Save all artifacts in the given directory.
- Return type
None
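For example (a minimal sketch, assuming a standard pykeen installation; the dataset and epoch count are illustrative, and a temporary directory is used as the target):

```python
import tempfile
from pathlib import Path

from pykeen.pipeline import pipeline

result = pipeline(
    dataset='Nations',
    model='TransE',
    training_kwargs=dict(num_epochs=1),
)

# Write all artifacts (metadata, results, trained model, ...) into one directory.
directory = tempfile.mkdtemp()
result.save_to_directory(directory)

print(sorted(p.name for p in Path(directory).iterdir()))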
- save_to_ftp(directory, ftp)[source]
Save all artifacts to the given directory in the FTP server.
- Parameters
directory (str) – The directory in the FTP server to save to
ftp (ftplib.FTP) – A connection to the FTP server
The following code will train a model and upload it to FTP using Python's builtin ftplib.FTP:

import ftplib

from pykeen.pipeline import pipeline

directory = 'test/test'
pipeline_result = pipeline(
    model='TransE',
    dataset='Kinships',
)
with ftplib.FTP(host='0.0.0.0', user='user', passwd='12345') as ftp:
    pipeline_result.save_to_ftp(directory, ftp)

If you want to try this with your own local server, run this code based on the example from Giampaolo Rodola's excellent library, pyftpdlib:

import os

from pyftpdlib.authorizers import DummyAuthorizer
from pyftpdlib.handlers import FTPHandler
from pyftpdlib.servers import FTPServer

authorizer = DummyAuthorizer()
authorizer.add_user(
    "user",
    "12345",
    homedir=os.path.expanduser('~/ftp'),
    perm="elradfmwMT",
)

handler = FTPHandler
handler.authorizer = authorizer

address = '0.0.0.0', 21
server = FTPServer(address, handler)
server.serve_forever()
- Return type
None
- save_to_s3(directory, bucket, s3=None)[source]
Save all artifacts to the given directory in an S3 Bucket.
- Parameters
directory (str) – The directory in the S3 bucket
bucket (str) – The name of the S3 bucket
s3 – A client from boto3.client(), if already instantiated
Note
Need to have a ~/.aws/credentials file set up. Read: https://realpython.com/python-boto3-aws-s3/
The following code will train a model and upload it to S3 using boto3:

import time

from pykeen.pipeline import pipeline

pipeline_result = pipeline(
    dataset='Kinships',
    model='TransE',
)
directory = f'tests/{time.strftime("%Y-%m-%d-%H%M%S")}'
bucket = 'pykeen'
pipeline_result.save_to_s3(directory, bucket=bucket)
- Return type
None