Ingestion error
  • I am seeing this in the log when trying to ingest an NITF message using the NITF parser in Superdesk:









    Error while parsing fishing-test-pure-iptc-1.xml: expected '}' before end of string level=ERROR process=ForkPoolWorker-1



  • I am also seeing this:









    default_attr can be used for simple child element (<class 'superdesk.io.feed_parsers.nitf.NITFFeedParser'>) level=ERROR process=ForkPoolWorker-1


    It looks as if the message is being ingested, since it is being stored in the ingested folder, but I cannot find it in Superdesk.

    I have removed rules and routings.

  • I have now also tried to create a local one-file feed, and I have created a desk called ingested. I created a routing rule saying that any messages arriving here shall be moved to this desk.

    I can now see the file being moved to the _PROCESSED folder, but I cannot see the message.

    What is it that I am doing wrong? :)

    I do see this in the log:









    default_attr can be used for simple child element (<class 'superdesk.io.feed_parsers.nitf.NITFFeedParser'>) level=ERROR process=ForkPoolWorker-1


    Jun 29 07:56:23 superdesk sh[16640]: 07:56:23 work.1 | default_attr can be used for simple child element (<class 'superdesk.io.feed_parsers.nitf.NITFFeedParser'>) level=ERROR process=ForkPoolWorker-1


    Jun 29 07:56:23 superdesk sh[16640]: 07:56:23 work.1 | Provider 5b35e5537411f541a8ed2859 updated level=INFO process=ForkPoolWorker-1


  • When I entered the edit window for ingest I noticed this in the log. Should I be worried?

    From the log:
    --- Logging error --- level=WARNING process=ForkPoolWorker-1
    Jun 29 09:06:14 superdesk sh[16640]: 11:06:14 work.1 | Traceback (most recent call last): level=WARNING process=ForkPoolWorker-1
    Jun 29 09:06:14 superdesk sh[16640]: 11:06:14 work.1 | File "/opt/superdesk/env/lib/python3.5/site-packages/superdesk/io/commands/update_ingest.py", line 438, in ingest_item
    Jun 29 09:06:14 superdesk sh[16640]: 11:06:14 work.1 | old_item = ingest_service.find_one(guid=item[GUID_FIELD], req=None) level=WARNING process=ForkPoolWorker-1
    Jun 29 09:06:14 superdesk sh[16640]: 11:06:14 work.1 | File "/opt/superdesk/env/lib/python3.5/site-packages/superdesk/services.py", line 85, in find_one
    Jun 29 09:06:14 superdesk sh[16640]: 11:06:14 work.1 | res = self.backend.find_one(self.datasource, req=req, **lookup) level=WARNING process=ForkPoolWorker-1
    Jun 29 09:06:14 superdesk sh[16640]: 11:06:14 work.1 | File "/opt/superdesk/env/lib/python3.5/site-packages/superdesk/eve_backend.py", line 40, in find_one
    Jun 29 09:06:14 superdesk sh[16640]: 11:06:14 work.1 | item_search = search_backend.find_one(endpoint_name, req=req, **lookup) level=WARNING process=ForkPoolWorker-1
    Jun 29 09:06:14 superdesk sh[16640]: 11:06:14 work.1 | File "/opt/superdesk/env/lib/python3.5/site-packages/eve_elastic/elastic.py", line 562, in find_one
    Jun 29 09:06:14 superdesk sh[16640]: 11:06:14 work.1 | hits = self.elastic(resource).search(body=query, **args) level=WARNING process=ForkPoolWorker-1
    Jun 29 09:06:14 superdesk sh[16640]: 11:06:14 work.1 | File "/opt/superdesk/env/lib/python3.5/site-packages/elasticsearch/client/utils.py", line 69, in _wrapped
    Jun 29 09:06:14 superdesk sh[16640]: 11:06:14 work.1 | return func(*args, params=params, **kwargs) level=WARNING process=ForkPoolWorker-1
    Jun 29 09:06:14 superdesk sh[16640]: 11:06:14 work.1 | File "/opt/superdesk/env/lib/python3.5/site-packages/elasticsearch/client/__init__.py", line 539, in search
    Jun 29 09:06:14 superdesk sh[16640]: 11:06:14 work.1 | doc_type, '_search'), params=params, body=body) level=WARNING process=ForkPoolWorker-1
    Jun 29 09:06:14 superdesk sh[16640]: 11:06:14 work.1 | File "/opt/superdesk/env/lib/python3.5/site-packages/elasticsearch/transport.py", line 327, in perform_request
    Jun 29 09:06:14 superdesk sh[16640]: 11:06:14 work.1 | status, headers, data = connection.perform_request(method, url, params, body, ignore=ignore, timeout=timeout) level=WARNING process=ForkPoolWorker-1
    Jun 29 09:06:14 superdesk sh[16640]: 11:06:14 work.1 | File "/opt/superdesk/env/lib/python3.5/site-packages/elasticsearch/connection/http_urllib3.py", line 110, in perform_request
    Jun 29 09:06:14 superdesk sh[16640]: 11:06:14 work.1 | self._raise_error(response.status, raw_data) level=WARNING process=ForkPoolWorker-1
    Jun 29 09:06:14 superdesk sh[16640]: 11:06:14 work.1 | File "/opt/superdesk/env/lib/python3.5/site-packages/elasticsearch/connection/base.py", line 114, in _raise_error
    Jun 29 09:06:14 superdesk sh[16640]: 11:06:14 work.1 | raise HTTP_EXCEPTIONS.get(status_code, TransportError)(status_code, error_message, additional_info) level=WARNING process=ForkPoolWorker-1
    Jun 29 09:06:14 superdesk sh[16640]: 11:06:14 work.1 | elasticsearch.exceptions.RequestError: level=WARNING process=ForkPoolWorker-1
    Jun 29 09:06:14 superdesk sh[16640]: 11:06:14 work.1 | During handling of the above exception, another exception occurred: level=WARNING process=ForkPoolWorker-1
    Jun 29 09:06:14 superdesk sh[16640]: 11:06:14 work.1 | Traceback (most recent call last): level=WARNING process=ForkPoolWorker-1
    Jun 29 09:06:14 superdesk sh[16640]: 11:06:14 work.1 | File "/usr/lib/python3.5/logging/__init__.py", line 980, in emit
    Jun 29 09:06:14 superdesk sh[16640]: 11:06:14 work.1 | msg = self.format(record) level=WARNING process=ForkPoolWorker-1
    Jun 29 09:06:14 superdesk sh[16640]: 11:06:14 work.1 | File "/usr/lib/python3.5/logging/__init__.py", line 830, in format
    Jun 29 09:06:14 superdesk sh[16640]: 11:06:14 work.1 | return fmt.format(record) level=WARNING process=ForkPoolWorker-1
    Jun 29 09:06:14 superdesk sh[16640]: 11:06:14 work.1 | File "/opt/superdesk/env/lib/python3.5/site-packages/celery/utils/log.py", line 154, in format
    Jun 29 09:06:14 superdesk sh[16640]: 11:06:14 work.1 | msg = logging.Formatter.format(self, record) level=WARNING process=ForkPoolWorker-1
    Jun 29 09:06:14 superdesk sh[16640]: 11:06:14 work.1 | File "/usr/lib/python3.5/logging/__init__.py", line 567, in format
    Jun 29 09:06:14 superdesk sh[16640]: 11:06:14 work.1 | record.message = record.getMessage() level=WARNING process=ForkPoolWorker-1
    Jun 29 09:06:14 superdesk sh[16640]: 11:06:14 work.1 | File "/usr/lib/python3.5/logging/__init__.py", line 328, in getMessage
    Jun 29 09:06:14 superdesk sh[16640]: 11:06:14 work.1 | msg = str(self.msg) level=WARNING process=ForkPoolWorker-1
    Jun 29 09:06:14 superdesk sh[16640]: 11:06:14 work.1 | File "/opt/superdesk/env/lib/python3.5/site-packages/elasticsearch/exceptions.py", line 55, in __str__
    Jun 29 09:06:14 superdesk sh[16640]: 11:06:14 work.1 | cause = ', %r' % self.info['error']['root_cause'][0]['reason'] level=WARNING process=ForkPoolWorker-1
    Jun 29 09:06:14 superdesk sh[16640]: 11:06:14 work.1 | TypeError: string indices must be integers level=WARNING process=ForkPoolWorker-1
    Jun 29 09:06:14 superdesk sh[16640]: 11:06:14 work.1 | Call stack: level=WARNING process=ForkPoolWorker-1
    Jun 29 09:06:14 superdesk sh[16640]: 11:06:14 work.1 | File "/opt/superdesk/env/bin/celery", line 11, in
    Jun 29 09:06:14 superdesk sh[16640]: 11:06:14 work.1 | sys.exit(main()) level=WARNING process=ForkPoolWorker-1
    Jun 29 09:06:14 superdesk sh[16640]: 11:06:14 work.1 | File "/opt/superdesk/env/lib/python3.5/site-packages/celery/__main__.py", line 16, in main
    Jun 29 09:06:14 superdesk sh[16640]: 11:06:14 work.1 | _main() level=WARNING process=ForkPoolWorker-1
    Jun 29 09:06:14 superdesk sh[16640]: 11:06:14 work.1 | File "/opt/superdesk/env/lib/python3.5/site-packages/celery/bin/celery.py", line 322, in main
    Jun 29 09:06:14 superdesk sh[16640]: 11:06:14 work.1 | cmd.execute_from_commandline(argv) level=WARNING process=ForkPoolWorker-1
    Jun 29 09:06:14 superdesk sh[16640]: 11:06:14 work.1 | File "/opt/superdesk/env/lib/python3.5/site-packages/celery/bin/celery.py", line 484, in execute_from_commandline
    Jun 29 09:06:14 superdesk sh[16640]: 11:06:14 work.1 | super(CeleryCommand, self).execute_from_commandline(argv))) level=WARNING process=ForkPoolWorker-1
    Jun 29 09:06:14 superdesk sh[16640]: 11:06:14 work.1 | File "/opt/superdesk/env/lib/python3.5/site-packages/celery/bin/base.py", line 275, in execute_from_commandline
    Jun 29 09:06:14 superdesk sh[16640]: 11:06:14 work.1 | return self.handle_argv(self.prog_name, argv[1:]) level=WARNING process=ForkPoolWorker-1
    Jun 29 09:06:14 superdesk sh[16640]: 11:06:14 work.1 | File "/opt/superdesk/env/lib/python3.5/site-packages/celery/bin/celery.py", line 476, in handle_argv
    Jun 29 09:06:14 superdesk sh[16640]: 11:06:14 work.1 | return self.execute(command, argv) level=WARNING process=ForkPoolWorker-1
    Jun 29 09:06:14 superdesk sh[16640]: 11:06:14 work.1 | File "/opt/superdesk/env/lib/python3.5/site-packages/celery/bin/celery.py", line 408, in execute
    Jun 29 09:06:14 superdesk sh[16640]: 11:06:14 work.1 | ).run_from_argv(self.prog_name, argv[1:], command=argv[0]) level=WARNING process=ForkPoolWorker-1
    Jun 29 09:06:14 superdesk sh[16640]: 11:06:14 work.1 | File "/opt/superdesk/env/lib/python3.5/site-packages/celery/bin/worker.py", line 223, in run_from_argv
    Jun 29 09:06:14 superdesk sh[16640]: 11:06:14 work.1 | return self(*args, **options) level=WARNING process=ForkPoolWorker-1
    Jun 29 09:06:14 superdesk sh[16640]: 11:06:14 work.1 | File "/opt/superdesk/env/lib/python3.5/site-packages/celery/bin/base.py", line 238, in __call__
    Jun 29 09:06:14 superdesk sh[16640]: 11:06:14 work.1 | ret = self.run(*args, **kwargs) level=WARNING process=ForkPoolWorker-1
    Jun 29 09:06:14 superdesk sh[16640]: 11:06:14 work.1 | File "/opt/superdesk/env/lib/python3.5/site-packages/celery/bin/worker.py", line 258, in run
    Jun 29 09:06:14 superdesk sh[16640]: 11:06:14 work.1 | worker.start() level=WARNING process=ForkPoolWorker-1
    Jun 29 09:06:14 superdesk sh[16640]: 11:06:14 work.1 | File "/opt/superdesk/env/lib/python3.5/site-packages/celery/worker/worker.py", line 205, in start
    Jun 29 09:06:14 superdesk sh[16640]: 11:06:14 work.1 | self.blueprint.start(self) level=WARNING process=ForkPoolWorker-1
    Jun 29 09:06:14 superdesk sh[16640]: 11:06:14 work.1 | File "/opt/superdesk/env/lib/python3.5/site-packages/celery/bootsteps.py", line 119, in start
    Jun 29 09:06:14 superdesk sh[16640]: 11:06:14 work.1 | step.start(parent) level=WARNING process=ForkPoolWorker-1
    Jun 29 09:06:14 superdesk sh[16640]: 11:06:14 work.1 | File "/opt/superdesk/env/lib/python3.5/site-packages/celery/bootsteps.py", line 370, in start
    Jun 29 09:06:14 superdesk sh[16640]: 11:06:14 work.1 | return self.obj.start() level=WARNING process=ForkPoolWorker-1
    Jun 29 09:06:14 superdesk sh[16640]: 11:06:14 work.1 | File "/opt/superdesk/env/lib/python3.5/site-packages/celery/concurrency/base.py", line 131, in start
    Jun 29 09:06:14 superdesk sh[16640]: 11:06:14 work.1 | self.on_start() level=WARNING process=ForkPoolWorker-1
    Jun 29 09:06:14 superdesk sh[16640]: 11:06:14 work.1 | File "/opt/superdesk/env/lib/python3.5/site-packages/celery/concurrency/prefork.py", line 112, in on_start
    Jun 29 09:06:14 superdesk sh[16640]: 11:06:14 work.1 | **self.options) level=WARNING process=ForkPoolWorker-1
    Jun 29 09:06:14 superdesk sh[16640]: 11:06:14 work.1 | File "/opt/superdesk/env/lib/python3.5/site-packages/celery/concurrency/asynpool.py", line 421, in __init__
    Jun 29 09:06:14 superdesk sh[16640]: 11:06:14 work.1 | super(AsynPool, self).__init__(processes, *args, **kwargs) level=WARNING process=ForkPoolWorker-1
    Jun 29 09:06:14 superdesk sh[16640]: 11:06:14 work.1 | File "/opt/superdesk/env/lib/python3.5/site-packages/billiard/pool.py", line 1007, in __init__
    Jun 29 09:06:14 superdesk sh[16640]: 11:06:14 work.1 | self._create_worker_process(i) level=WARNING process=ForkPoolWorker-1
    Jun 29 09:06:14 superdesk sh[16640]: 11:06:14 work.1 | File "/opt/superdesk/env/lib/python3.5/site-packages/celery/concurrency/asynpool.py", line 438, in _create_worker_process
    Jun 29 09:06:14 superdesk sh[16640]: 11:06:14 work.1 | return super(AsynPool, self)._create_worker_process(i) level=WARNING process=ForkPoolWorker-1
    Jun 29 09:06:14 superdesk sh[16640]: 11:06:14 work.1 | File "/opt/superdesk/env/lib/python3.5/site-packages/billiard/pool.py", line 1116, in _create_worker_process
    Jun 29 09:06:14 superdesk sh[16640]: 11:06:14 work.1 | w.start() level=WARNING process=ForkPoolWorker-1
    Jun 29 09:06:14 superdesk sh[16640]: 11:06:14 work.1 | File "/opt/superdesk/env/lib/python3.5/site-packages/billiard/process.py", line 124, in start
    Jun 29 09:06:14 superdesk sh[16640]: 11:06:14 work.1 | self._popen = self._Popen(self) level=WARNING process=ForkPoolWorker-1
    Jun 29 09:06:14 superdesk sh[16640]: 11:06:14 work.1 | File "/opt/superdesk/env/lib/python3.5/site-packages/billiard/context.py", line 333, in _Popen
    Jun 29 09:06:14 superdesk sh[16640]: 11:06:14 work.1 | return Popen(process_obj) level=WARNING process=ForkPoolWorker-1
    Jun 29 09:06:14 superdesk sh[16640]: 11:06:14 work.1 | File "/opt/superdesk/env/lib/python3.5/site-packages/billiard/popen_fork.py", line 24, in __init__
    Jun 29 09:06:14 superdesk sh[16640]: 11:06:14 work.1 | self._launch(process_obj) level=WARNING process=ForkPoolWorker-1
    Jun 29 09:06:14 superdesk sh[16640]: 11:06:14 work.1 | File "/opt/superdesk/env/lib/python3.5/site-packages/billiard/popen_fork.py", line 79, in _launch
    Jun 29 09:06:14 superdesk sh[16640]: 11:06:14 work.1 | code = process_obj._bootstrap() level=WARNING process=ForkPoolWorker-1
    Jun 29 09:06:14 superdesk sh[16640]: 11:06:14 work.1 | File "/opt/superdesk/env/lib/python3.5/site-packages/billiard/process.py", line 327, in _bootstrap
    Jun 29 09:06:14 superdesk sh[16640]: 11:06:14 work.1 | self.run() level=WARNING process=ForkPoolWorker-1
    Jun 29 09:06:14 superdesk sh[16640]: 11:06:14 work.1 | File "/opt/superdesk/env/lib/python3.5/site-packages/billiard/process.py", line 114, in run
    Jun 29 09:06:14 superdesk sh[16640]: 11:06:14 work.1 | self._target(*self._args, **self._kwargs) level=WARNING process=ForkPoolWorker-1
    Jun 29 09:06:14 superdesk sh[16640]: 11:06:14 work.1 | File "/opt/superdesk/env/lib/python3.5/site-packages/billiard/pool.py", line 289, in __call__
    Jun 29 09:06:14 superdesk sh[16640]: 11:06:14 work.1 | sys.exit(self.workloop(pid=pid)) level=WARNING process=ForkPoolWorker-1
    Jun 29 09:06:14 superdesk sh[16640]: 11:06:14 work.1 | File "/opt/superdesk/env/lib/python3.5/site-packages/billiard/pool.py", line 358, in workloop
    Jun 29 09:06:14 superdesk sh[16640]: 11:06:14 work.1 | result = (True, prepare_result(fun(*args, **kwargs))) level=WARNING process=ForkPoolWorker-1
    Jun 29 09:06:14 superdesk sh[16640]: 11:06:14 work.1 | File "/opt/superdesk/env/lib/python3.5/site-packages/celery/app/trace.py", line 540, in _fast_trace_task
    Jun 29 09:06:14 superdesk sh[16640]: 11:06:14 work.1 | uuid, args, kwargs, request, level=WARNING process=ForkPoolWorker-1
    Jun 29 09:06:14 superdesk sh[16640]: 11:06:14 work.1 | File "/opt/superdesk/env/lib/python3.5/site-packages/celery/app/trace.py", line 375, in trace_task
    Jun 29 09:06:14 superdesk sh[16640]: 11:06:14 work.1 | R = retval = fun(*args, **kwargs) level=WARNING process=ForkPoolWorker-1
    Jun 29 09:06:14 superdesk sh[16640]: 11:06:14 work.1 | File "/opt/superdesk/env/lib/python3.5/site-packages/superdesk/celery_app.py", line 91, in __call__
    Jun 29 09:06:14 superdesk sh[16640]: 11:06:14 work.1 | return super().__call__(*args, **kwargs) level=WARNING process=ForkPoolWorker-1
    Jun 29 09:06:14 superdesk sh[16640]: 11:06:14 work.1 | File "/opt/superdesk/env/lib/python3.5/site-packages/celery/app/trace.py", line 632, in __protected_call__
    Jun 29 09:06:14 superdesk sh[16640]: 11:06:14 work.1 | return self.run(*args, **kwargs) level=WARNING process=ForkPoolWorker-1
    Jun 29 09:06:14 superdesk sh[16640]: 11:06:14 work.1 | File "/opt/superdesk/env/lib/python3.5/site-packages/superdesk/io/commands/update_ingest.py", line 224, in update_provider
    Jun 29 09:06:14 superdesk sh[16640]: 11:06:14 work.1 | ingest_items(items, provider, feeding_service, rule_set, routing_scheme) level=WARNING process=ForkPoolWorker-1
    Jun 29 09:06:14 superdesk sh[16640]: 11:06:14 work.1 | File "/opt/superdesk/env/lib/python3.5/site-packages/superdesk/io/commands/update_ingest.py", line 395, in ingest_items
    Jun 29 09:06:14 superdesk sh[16640]: 11:06:14 work.1 | routing_scheme=routing_scheme if not item[GUID_FIELD] in items_in_package else None) level=WARNING process=ForkPoolWorker-1
    Jun 29 09:06:14 superdesk sh[16640]: 11:06:14 work.1 | File "/opt/superdesk/env/lib/python3.5/site-packages/superdesk/io/commands/update_ingest.py", line 518, in ingest_item
    Jun 29 09:06:14 superdesk sh[16640]: 11:06:14 work.1 | logger.exception(ex) level=WARNING process=ForkPoolWorker-1
    Jun 29 09:06:14 superdesk sh[16640]: 11:06:14 work.1 | Message: RequestError(400, 'SearchPhaseExecutionException[Failed to execute phase [query], all shards failed; shardFailures {[b7Up4-7JROaHPdUCpW-5mg][superdesk_2fd7e37a][0]: SearchParseException[[superdesk_2fd7e37a][0]: from[-1],size[-1]: Parse Failure [Failed to parse source [{"query": {"constant_score": {"filter": {"and": [{"term": {"guid": null}}]}}}}]]]; nested: QueryParsingException[[superdesk_2fd7e37a] No field specified for term filter]; }{[b7Up4-7JROaHPdUCpW-5mg][superdesk_2fd7e37a][1]: SearchParseException[[superdesk_2fd7e37a][1]: from[-1],size[-1]: Parse Failure [Failed to parse source [{"query": {"constant_score": {"filter": {"and": [{"term": {"guid": null}}]}}}}]]]; nested: QueryParsingException[[superdesk_2fd7e37a] No field specified for term filter]; }{[b7Up4-7JROaHPdUCpW-5mg][superdesk_2fd7e37a][2]: SearchParseException[[superdesk_2fd7e37a][2]: from[-1],size[-1]: Parse Failure [Failed to parse source [{"query": {"constant_score": {"filter": {"and": [{"term": {"guid": null}}]}}}}]]]; nested: QueryParsingException[[superdesk_2fd7e37a] No field specified for term filter]; }{[b7Up4-7JROaHPdUCpW-5mg][superdesk_2fd7e37a][3]: SearchParseException[[superdesk_2fd7e37a][3]: from[-1],size[-1]: Parse Failure [Failed to parse source [{"query": {"constant_score": {"filter": {"and": [{"term": {"guid": null}}]}}}}]]]; nested: QueryParsingException[[superdesk_2fd7e37a] No field specified for term filter]; }{[b7Up4-7JROaHPdUCpW-5mg][superdesk_2fd7e37a][4]: SearchParseException[[superdesk_2fd7e37a][4]: from[-1],size[-1]: Parse Failure [Failed to parse source [{"query": {"constant_score": {"filter": {"and": [{"term": {"guid": null}}]}}}}]]]; nested: QueryParsingException[[superdesk_2fd7e37a] No field specified for term filter]; }]', {'error': 'SearchPhaseExecutionException[Failed to execute phase [query], all shards failed; shardFailures {[b7Up4-7JROaHPdUCpW-5mg][superdesk_2fd7e37a][0]: 
SearchParseException[[superdesk_2fd7e37a][0]: from[-1],size[-1]: Parse Failure [Failed to parse source [{"query": {
    Jun 29 09:06:14 superdesk sh[16640]: "constant_score": {"filter": {"and": [{"term": {"guid": null}}]}}}}]]]; nested: QueryParsingException[[superdesk_2fd7e37a] No field specified for term filter]; }{[b7Up4-7JROaHPdUCpW-5mg][superdesk_2fd7e37a][1]: SearchParseException[[superdesk_2fd7e37a][1]: from[-1],size[-1]: Parse Failure [Failed to parse source [{"query": {"constant_score": {"filter": {"and": [{"term": {"guid": null}}]}}}}]]]; nested: QueryParsingException[[superdesk_2fd7e37a] No field specified for term filter]; }{[b7Up4-7JROaHPdUCpW-5mg][superdesk_2fd7e37a][2]: SearchParseException[[superdesk_2fd7e37a][2]: from[-1],size[-1]: Parse Failure [Failed to parse source [{"query": {"constant_score": {"filter": {"and": [{"term": {"guid": null}}]}}}}]]]; nested: QueryParsingException[[superdesk_2fd7e37a] No field specified for term filter]; }{[b7Up4-7JROaHPdUCpW-5mg][superdesk_2fd7e37a][3]: SearchParseException[[superdesk_2fd7e37a][3]: from[-1],size[-1]: Parse Failure [Failed to parse source [{"query": {"constant_score": {"filter": {"and": [{"term": {"guid": null}}]}}}}]]]; nested: QueryParsingException[[superdesk_2fd7e37a] No field specified for term filter]; }{[b7Up4-7JROaHPdUCpW-5mg][superdesk_2fd7e37a][4]: SearchParseException[[superdesk_2fd7e37a][4]: from[-1],size[-1]: Parse Failure [Failed to parse source [{"query": {"constant_score": {"filter": {"and": [{"term": {"guid": null}}]}}}}]]]; nested: QueryParsingException[[superdesk_2fd7e37a] No field specified for term filter]; }]', 'status': 400})
    Jun 29 09:06:14 superdesk sh[16640]: 11:06:14 work.1 | Arguments: () level=WARNING process=ForkPoolWorker-1
    Jun 29 09:06:14 superdesk sh[16640]: 11:06:14 work.1 | Failed to ingest the following items: {None} level=ERROR process=ForkPoolWorker-1
    Jun 29 09:06:14 superdesk sh[16640]: 11:06:14 work.1 | Provider 5b35e5537411f541a8ed2859 updated level=INFO process=ForkPoolWorker-1
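    The key part of the traceback is the query it sends: {"term": {"guid": null}}, which Elasticsearch rejects with "No field specified for term filter" because the item's guid is None. A minimal sketch (hypothetical guard, not the actual Superdesk code) of why skipping the lookup for a missing guid would avoid this RequestError:

```python
# Hypothetical guard, not Superdesk code: the traceback above comes from
# calling find_one(guid=None), which produces the invalid Elasticsearch
# filter {"term": {"guid": null}}. Checking for a usable guid first
# sidesteps the query entirely.
def find_existing(ingest_service, item):
    guid = item.get('guid')
    if not guid:
        # No guid on the ingested item; nothing sensible to look up.
        return None
    return ingest_service.find_one(guid=guid, req=None)
```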
  • Seems like the guid doesn't get populated during ingest; it should use head/docdata/doc-id/@id-string.
    Is it in the file?
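    In other words, the parser derives the item's guid from that attribute. A minimal sketch of the lookup (stdlib ElementTree, not the actual Superdesk parser code) shows how a missing id-string leaves the guid as None:

```python
import xml.etree.ElementTree as ET

# Toy NITF fragment carrying the attribute the parser looks for.
nitf = """<nitf>
  <head>
    <docdata>
      <doc-id regsrc="SCRPE" id-string="cedd5bfb-9b10-4c31-b2f9-2e740309b618"/>
    </docdata>
  </head>
</nitf>"""

root = ET.fromstring(nitf)
doc_id = root.find('head/docdata/doc-id')
# If <doc-id> or its id-string attribute is absent, guid ends up None,
# which matches the "Failed to ingest the following items: {None}" log line.
guid = doc_id.get('id-string') if doc_id is not None else None
print(guid)  # cedd5bfb-9b10-4c31-b2f9-2e740309b618
```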
  • I'll check
  • We can set this issue to solved. You were correct, the id-string was not there.

    This is a docdata structure Superdesk will ingest:









    <docdata>
        <doc-id regsrc="SCRPE" id-string="cedd5bfb-9b10-4c31-b2f9-2e740309b618"/>
        <date.issue norm="20180629T040619+0200"/>
        <doc.copyright holder="SCRPE" year="2018"/>
        <du-key version="0" key="slug-test-ingest"/>
    </docdata>


