# Question Regarding use of LSR model

10 Posts
2 Users
0 Likes
51 Views
Posts: 7
Member
Topic starter
(@csseow)
Active Member
Joined: 4 weeks ago

Hello, may I know how I can combine the LSR relation extraction model with the AllenNLP coreference resolution and NER models? I am having difficulty understanding the format of the instance to be passed into the LSR preprocessor, and I am confused about where the other two models come into the pipeline to produce the final output.

May I also know whether, and how, I will have to use the DocRED JSON files to train the model or produce the data? Any help would be greatly appreciated. Thank you in advance.

9 Replies
Posts: 23
AISG Staff
(@raymond_aisg)
Eminent Member
Joined: 4 months ago

Hi @csseow,

Unfortunately, I can't say for sure how the AllenNLP coreference resolution and NER model predictions would fit into the DocRED input.

With regards to the DocRED format, the format details are covered in the DocRED paper.

The implementation details of the DocRED input format are described in the repository's README file.

Regarding training the model, you can pass the JSON files to the trainer via the --train_file argument and specify the --output_dir argument to indicate the folder for the training metrics as well as the saved trained weights.
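For example, an invocation might look like the following (the script name and file paths here are illustrative, not taken from the repository):

```shell
# Hypothetical paths; the train file follows the DocRED annotation format
python train.py \
    --train_file data/train_annotated.json \
    --output_dir ./lsr_checkpoints
```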

Hope this helps.

Member
(@csseow)
Joined: 4 weeks ago

Active Member
Posts: 7

@raymond_aisg Hi raymond, thank you for the prompt and helpful response. However, I'm sorry to say that I am still confused about the input instance dictionary and exactly what I need to pass into the preprocessor of the LSR model. Would I even need to use the two AllenNLP models mentioned in the LSR model card? Or are those the models I need to use in order to produce the instance input needed for the LSR preprocessor? Thank you very much.

AISG Staff
(@raymond_aisg)
Joined: 4 months ago

Eminent Member
Posts: 23

My apologies, I misunderstood your question.

As mentioned in the model card, the LSR model by itself does not include the coreference resolution and named entity recognition capabilities required to produce the input for the LSR's preprocessor. As such, in order to build a functioning demo, we utilize the pre-trained NER and coref models from AllenNLP to generate the predictions required for the DocRED input format.

Please note that it is also possible to use other coref and NER models, as long as their prediction output is post-processed to meet the DocRED format.

For an example of how the AllenNLP coref and NER predictions are used: the Text2DocRED pipeline initializes the coref and NER models from AllenNLP, and their inference calls are performed in the predict API endpoint.

The preprocess method of the Text2DocRED pipeline parses the prediction output of both the coref and NER models into the DocRED format before returning it to the prediction endpoint. If a different coref or NER model is used, both prediction output parsers will need to be updated.
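As a rough illustration of that post-processing step, here is a minimal sketch of grouping NER mentions by coreference cluster into a DocRED-style `vertexSet`. The field names follow the DocRED paper; the input structures are simplified stand-ins, not the exact AllenNLP output schema:

```python
# Sketch: converting (simplified) NER + coref predictions into a
# DocRED-style instance. Each vertexSet entry groups all mentions
# that refer to the same entity.

def to_docred(sents, ner_mentions, coref_clusters):
    """sents: list of token lists (one per sentence).
    ner_mentions: list of dicts {"sent_id", "pos": (start, end), "name", "type"}.
    coref_clusters: list of lists of indices into ner_mentions that corefer."""
    vertex_set = []
    clustered = set()
    # Mentions linked by coreference become one multi-mention entity.
    for cluster in coref_clusters:
        vertex_set.append([ner_mentions[i] for i in cluster])
        clustered.update(cluster)
    # Mentions outside any cluster become singleton entities.
    for i, mention in enumerate(ner_mentions):
        if i not in clustered:
            vertex_set.append([mention])
    return {"title": "doc", "sents": sents, "vertexSet": vertex_set}

doc = to_docred(
    [["Alice", "lives", "in", "Paris", "."]],
    [{"sent_id": 0, "pos": (0, 1), "name": "Alice", "type": "PER"},
     {"sent_id": 0, "pos": (3, 4), "name": "Paris", "type": "LOC"}],
    [],  # no coreference links in this tiny example
)
```

With no coref links, each NER mention above becomes its own singleton entity in `vertexSet`.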

Member
(@csseow)
Joined: 4 weeks ago

Active Member
Posts: 7

@raymond_aisg Hi raymond,

Thank you very much for your response. It has been extremely beneficial in helping me implement the AI brick in my project.

Moving on from that, however, I have run into a performance problem with my use of the Text2DocRED pipeline, specifically when passing document data through both AllenNLP models in VS Code. When I ran the same text strings on the sgnlp and AllenNLP demo websites, the output was returned almost instantly. However, when I run the predict function of the AllenNLP coref and NER predictors in my Python file in VS Code, it takes up to 20 seconds to return the output. Below are the messages printed in the terminal as I ran the program:

Some weights of BertModel were not initialized from the model checkpoint at SpanBERT/spanbert-large-cased and are newly initialized: ['bert.pooler.dense.bias', 'bert.pooler.dense.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.

May I know how I can optimise this file so that the AllenNLP model prediction, and hence the Text2DocRED pipeline, runs as fast as the demo web app? Thank you very much.

AISG Staff
(@raymond_aisg)
Joined: 4 months ago

Eminent Member
Posts: 23

@csseow Hi,

Judging from the inference speed you've mentioned, I'm assuming you are performing inference on the CPU of a laptop or desktop, meaning the models are loaded into system RAM. The performance you are experiencing is typical for consumer-grade computers.

For the SGnlp demo, the models are deployed on the cloud using server-grade hardware, usually with multiple CPU cores dedicated solely to performing inference. I can't say for sure what hardware the AllenNLP demo uses to deploy their models, but it is typical (and recommended) to use a GPU for applications that require multiple concurrent inferences (which is a must given the amount of traffic the AllenNLP demo pages receive).

To speed up inference for a deep learning model, you would need GPU hardware: GPUs are designed for parallel processing and optimized for vector operations, and can therefore significantly speed up a deep learning model's inference and training.

A simple explanation of why a GPU is needed for deep learning is linked here.

If you wish to squeeze out a bit more inference speed, you can also look at techniques like dynamic quantization or pruning. Please note that these techniques usually trade some model accuracy for lower latency, and may require re-training the model.
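As an illustration, PyTorch's built-in dynamic quantization can be applied to a trained model in a single call. This is shown on a small stand-in network rather than the actual LSR or AllenNLP models:

```python
import torch

# Dynamic quantization: Linear-layer weights are stored as int8 and
# dequantized on the fly, shrinking the model and often speeding up
# CPU inference. It is applied post-training, with no re-training step.
model = torch.nn.Sequential(
    torch.nn.Linear(8, 8),
    torch.nn.ReLU(),
    torch.nn.Linear(8, 2),
)
qmodel = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)
out = qmodel(torch.randn(1, 8))  # same call interface as the original model
```

Pruning, by contrast, removes weights outright and usually needs fine-tuning afterwards to recover accuracy.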

Hope this helps.

Member
(@csseow)
Joined: 4 weeks ago

Active Member
Posts: 7

@raymond_aisg Hi raymond,

Thank you for the very detailed explanation addressing my problem. I have checked my PC and it actually contains both a CPU and an Nvidia RTX 3060 Ti GPU. May I know how to check which processing unit the model is using to perform inference? And if it is using my CPU, how would I be able to use my GPU instead? If it helps, I am running my code in VS Code rather than Jupyter Notebook or Google Colab. Thank you.

AISG Staff
(@raymond_aisg)
Joined: 4 months ago

Eminent Member
Posts: 23

@csseow Hi,

To use the GPU for running the models, there are a few setup steps to complete first:

1. Setup CUDA

Follow this guide to set up the CUDA installation on your PC. The CUDA toolkit allows computation to make use of the cores on the GPU.

After this step, you should be able to run the nvidia-smi command in your terminal to view your GPU.

For example, the nvidia-smi output lists each GPU (in this case an Nvidia Tesla V100) along with the driver version and CUDA version (here, CUDA 11.7).

2. Update Pytorch version according to your CUDA version

Go to https://pytorch.org/ and select the configuration that corresponds to the setup on your PC, then install the matching PyTorch package.

3. Test that the PyTorch in your environment is able to detect your GPU via the following example. If the CUDA installation is set up properly and the correct PyTorch version is installed, the is_available() method should return True and the device_count() method should return 1 (for a single GPU).



```python
import torch

print(torch.cuda.is_available())   # True if CUDA and PyTorch are set up correctly
print(torch.cuda.device_count())   # 1 for a single-GPU machine
```



4. Since the SGnlp package does not utilize the GPU by default, there are a few places which need to be updated.

First, for the coref and NER models, you need to specify the CUDA device when the models are initialized.

According to the AllenNLP documentation, you can set the CUDA device via the cuda_device argument of the from_path method. If you have only one GPU on your PC, set this value to 0; the default value of -1 indicates that the CPU should be used.
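A sketch of what that might look like (the archive URL below is an illustrative example of AllenNLP's public models, not necessarily the exact one used in the Text2DocRED pipeline; loading it downloads a large archive):

```python
from allennlp.predictors.predictor import Predictor

# cuda_device=0 selects the first GPU; -1 (the default) means CPU.
coref_predictor = Predictor.from_path(
    "https://storage.googleapis.com/allennlp-public-models/"
    "coref-spanbert-large-2021.03.10.tar.gz",
    cuda_device=0,
)
```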

Next is the LSR model. Since the LSR model is built on the HuggingFace Transformers package, the model object is essentially a PyTorch module, so you can simply call the .to("cuda") method to move the model onto the GPU.
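As a sketch of the device-move step (using a small stand-in module in place of the actual loaded LSR model), note that any PyTorch module moves to the GPU with one .to() call, and that inputs must live on the same device:

```python
import torch

model = torch.nn.Linear(4, 2)  # stand-in for the loaded LSR model object
device = "cuda" if torch.cuda.is_available() else "cpu"  # fall back to CPU
model = model.to(device)

# Input tensors must be moved to the same device as the model.
inputs = torch.randn(1, 4).to(device)
out = model(inputs)
```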
Try running the inference example code again to check for the inference speed up.

P.S.: Moving all 3 models to the GPU requires all 3 models to reside in GPU vRAM; the 3060 Ti only has 8 GB of vRAM, which might not be sufficient to load all 3 models. If you encounter out-of-memory issues, try loading only 1 or 2 models on the GPU to see if they fit. You can check each model's memory consumption using the same nvidia-smi command mentioned above.

Hope this helps.

Member
(@csseow)
Joined: 4 weeks ago

Active Member
Posts: 7

@raymond_aisg Hi raymond,

It turns out the file took so long to run mainly because of the loading of the AllenNLP models, even though I had already changed the models to run on my GPU. Now inference takes only a few seconds!

Thank you so much for all the detailed replies and the wonderful help you have given me over the past few days. I truly appreciate the time and effort you took to assist me with my understanding.

AISG Staff
(@raymond_aisg)
Joined: 4 months ago

Eminent Member
Posts: 23

No problem at all, thank you for trying out SGnlp. 😀
