How do I fix this error from scrapy.crawler?

I am developing a web app with Django and Scrapy that looks up English words. When I submit a word through a Django form, Scrapy fetches the word's Japanese translation from an online dictionary.

After I submit a word, Scrapy runs, and when I then stop runserver the following error occurs:

[scrapy.crawler] INFO: Received SIGINT, shutting down gracefully. Send again to force
Error: That port is already in use.

I can stop it with the kill command, but it is inconvenient to type a command every time the error happens. I want to know how to fix this properly, for example by changing the code.

Here is my code:

def add_venue(request):
    submitted = False
    if request.method == 'POST':
        form = VenueForm(request.POST)
        if form.is_valid():
            name = form.cleaned_data['word']  # the word that was entered
            queryset = Newword.objects.filter(word=name)
            if queryset.exists():
                process = CrawlerProcess({
                    'USER_AGENT': 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)',
                })
                process.crawl(WordSpider, query=name)  # run the spider with the entered word
                process.start()  # the script will block here until the crawling is finished
            return HttpResponseRedirect('/list/')
    else:
        form = VenueForm()
        if 'submitted' in request.GET:
            submitted = True
    return render(request, 'form.html', {'form': form, 'submitted': submitted})
I start the server with:

    python3 manage.py runserver --nothreading --noreload
class WordSpider(scrapy.Spider):
    name = 'word'
    allowed_domains = ['']
    # start_urls = ['']

    def __init__(self, query='', *args, **kwargs):
        super(WordSpider, self).__init__(*args, **kwargs)
        #self.user_agent = 'custom-user-agent'
        self.start_urls = ['' + query]

    def parse(self, response):
        # item=ElscrapyItem()
        # item['word']=response.xpath('//*[@id="summary"]/div[2]/p/span[2]/text()').get().replace('\n', '').strip()
        # yield item
        word = response.xpath('//*[@id="summary"]/div[2]/p/span[2]/text()').get().replace('\n', '').strip()
        #loader = ItemLoader(item = ElscrapyItem(), response=response)
        #loader.add_xpath('word', '//*[@id="summary"]/div[2]/p/span[2]/text()')
        #yield loader.load_item()

The Django project lives inside the Scrapy project.

It's not a great idea to start a whole Twisted reactor, thread pool, and everything else CrawlerProcess entails inside your request handler. In particular, a Twisted reactor cannot be restarted once it has been stopped, which is why your server gets wedged.
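(If you do want to keep Scrapy, one workaround is to launch the crawl in a separate OS process, so the reactor never runs inside the Django server process. The sketch below demonstrates the pattern with a harmless stand-in command; the commented-out `scrapy crawl` invocation is what you would actually run, assuming your spider is registered under the name `word`.)

```python
import subprocess
import sys

# Hedged sketch: run the spider via the `scrapy crawl` CLI in its own
# process instead of using CrawlerProcess inside the view, e.g.:
#   subprocess.run(['scrapy', 'crawl', 'word', '-a', f'query={name}'], check=True)
# Demonstrated here with a stand-in command so the snippet is runnable:
result = subprocess.run(
    [sys.executable, '-c', 'print("crawl finished")'],
    capture_output=True, text=True, check=True,
)
print(result.stdout.strip())  # → crawl finished
```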

You don't need Scrapy at all to parse a single page; use requests and bs4 (BeautifulSoup) instead.

import requests
import bs4

def get_word(query: str) -> str:
    resp = requests.get(f'{query}', headers={
        'User-Agent': 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)',
    })
    resp.raise_for_status()  # fail loudly on HTTP errors
    soup = bs4.BeautifulSoup(resp.text, 'html.parser')
    return soup.find(class_='content-explanation').get_text(strip=True)
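For reference, here is how the `find(class_=...)` lookup behaves on a minimal inline snippet (the real dictionary page's markup is assumed to contain an element with the `content-explanation` class; adjust the class name to whatever the site actually uses):

```python
import bs4

# Stand-in HTML in place of the (omitted) dictionary page.
html = '<div><span class="content-explanation">\n  apple リンゴ\n</span></div>'
soup = bs4.BeautifulSoup(html, 'html.parser')
# get_text(strip=True) trims surrounding whitespace and newlines.
print(soup.find(class_='content-explanation').get_text(strip=True))  # → apple リンゴ
```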


so plugging that into your view (and simplifying it a bit):

def add_venue(request):
    submitted = ('submitted' in request.GET)
    if request.method == 'POST':
        form = VenueForm(request.POST)
        if form.is_valid():
            word = form.cleaned_data['word']
            if not Newword.objects.filter(word=word).exists():
                explanation = get_word(word)
                # (do something with `explanation` probably?)
            return HttpResponseRedirect('/list/')
    else:
        form = VenueForm()
    return render(request, 'form.html', {'form': form, 'submitted': submitted})