{"specs/kilo/driver-periodic-tasks.rst":[
{"author":{"_account_id":6618,"name":"Ruby Loo","email":"opensrloo@gmail.com","username":"rloo"},"change_message_id":"06f87fa6c01e2d0a67c38cb2e55f91f8f6d2fd18","unresolved":false,"context_lines":[{"line_number":17,"context_line":""},{"line_number":18,"context_line":"Currently Ironic conductor can run periodic tasks in a green thread. However,"},{"line_number":19,"context_line":"if some driver requires a driver-specific task to be run, it needs to patch"},{"line_number":20,"context_line":"conductor manager, which is not acceptable."},{"line_number":21,"context_line":""},{"line_number":22,"context_line":"Proposed change"},{"line_number":23,"context_line":"\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d"}],"source_content_type":"text/x-rst","patch_set":1,"id":"5a890539_52aaa871","line":20,"updated":"2014-12-02 16:02:29.000000000","message":"For example?","commit_id":"75be6967f55dedf538012dd8162f41146ed48796"},
{"author":{"_account_id":10343,"name":"Jim Rollenhagen","email":"jim@jimrollenhagen.com","username":"jimrollenhagen"},"change_message_id":"de85640d30b0af29cb2c8a92dbc3c2eb983e1a7e","unresolved":false,"context_lines":[{"line_number":17,"context_line":""},{"line_number":18,"context_line":"Currently Ironic conductor can run periodic tasks in a green thread. However,"},{"line_number":19,"context_line":"if some driver requires a driver-specific task to be run, it needs to patch"},{"line_number":20,"context_line":"conductor manager, which is not acceptable."},{"line_number":21,"context_line":""},{"line_number":22,"context_line":"Proposed change"},{"line_number":23,"context_line":"\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d"}],"source_content_type":"text/x-rst","patch_set":1,"id":"3a961159_21464ac1","line":20,"in_reply_to":"5a890539_52aaa871","updated":"2015-01-10 02:32:23.000000000","message":"+1. I\u0027ve thought about something like this to check for dead ramdisks with the long-running ramdisks spec, however I\u0027d like to hear more about Dmitry\u0027s use case.","commit_id":"75be6967f55dedf538012dd8162f41146ed48796"},
{"author":{"_account_id":2889,"name":"Aeva Black","email":"aeva.online@gmail.com","username":"tenbrae"},"change_message_id":"8eb555412e7da8d953591fba21cbaca9d81e3eae","unresolved":false,"context_lines":[{"line_number":23,"context_line":"\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d"},{"line_number":24,"context_line":""},{"line_number":25,"context_line":"* Modify ``ConductorManager.__init__`` to collect periodic tasks from each"},{"line_number":26,"context_line":"  present interface of each driver. It should use existing markers added by"},{"line_number":27,"context_line":"  ``@periodic_task.periodic_task`` to a method to detect periodic task."},{"line_number":28,"context_line":"  Information about a periodic tasks should be placed in ``_periodic_spacing``"},{"line_number":29,"context_line":"  ``_periodic_last_run`` and ``_periodic_tasks`` attributes of the conductor."}],"source_content_type":"text/x-rst","patch_set":1,"id":"5a890539_8b50d25f","line":26,"updated":"2014-11-19 23:59:37.000000000","message":"It appears that this puts ALL the periodic tasks into a single thread.\n\nIf possible, I would prefer to put each driver\u0027s periodic task into its own thread, so that, in the worst case, a rogue driver can not break other drivers OR the main conductor periodic task thread.\n\nEDIT: oh, you cover that later on :)","commit_id":"75be6967f55dedf538012dd8162f41146ed48796"},
{"author":{"_account_id":6618,"name":"Ruby Loo","email":"opensrloo@gmail.com","username":"rloo"},"change_message_id":"06f87fa6c01e2d0a67c38cb2e55f91f8f6d2fd18","unresolved":false,"context_lines":[{"line_number":58,"context_line":"Driver API impact"},{"line_number":59,"context_line":"-----------------"},{"line_number":60,"context_line":""},{"line_number":61,"context_line":"No impact for the dirver API itself."},{"line_number":62,"context_line":""},{"line_number":63,"context_line":"Nova driver impact"},{"line_number":64,"context_line":"------------------"}],"source_content_type":"text/x-rst","patch_set":1,"id":"5a890539_52f9c84c","line":61,"updated":"2014-12-02 16:02:29.000000000","message":"nit: \u0027dirver\u0027 -\u003e \u0027driver\u0027","commit_id":"75be6967f55dedf538012dd8162f41146ed48796"},
{"author":{"_account_id":5805,"name":"Chris Krelle","email":"nobodycam@gmail.com","username":"nobodycam"},"change_message_id":"51161706b1c8809d41cbe78175c6e718ed97831e","unresolved":false,"context_lines":[{"line_number":28,"context_line":"  to poll status information from BMC."},{"line_number":29,"context_line":""},{"line_number":30,"context_line":"* For deploy drivers supporting long-running ramdisks a driver-specific"},{"line_number":31,"context_line":"  periodic task may be used to poll for dead ramdisks."},{"line_number":32,"context_line":""},{"line_number":33,"context_line":".. _in-band inspection using discoverd: http://specs.openstack.org/openstack/ironic-specs/specs/kilo/inband-properties-discovery.html"},{"line_number":34,"context_line":""}],"source_content_type":"text/x-rst","patch_set":3,"id":"3a961159_55ffb700","line":31,"updated":"2015-01-16 23:54:45.000000000","message":"does dead ramdisk \u003d\u003d hung nodes in the AVAILABLE state?","commit_id":"d822898a1b58661ef022bfa43be8abc8409aa369"},
{"author":{"_account_id":10239,"name":"Dmitry Tantsur","email":"dtantsur@protonmail.com","username":"dtantsur"},"change_message_id":"1512887f5353967c51d747f0559b6cc44aeda360","unresolved":false,"context_lines":[{"line_number":28,"context_line":"  to poll status information from BMC."},{"line_number":29,"context_line":""},{"line_number":30,"context_line":"* For deploy drivers supporting long-running ramdisks a driver-specific"},{"line_number":31,"context_line":"  periodic task may be used to poll for dead ramdisks."},{"line_number":32,"context_line":""},{"line_number":33,"context_line":".. _in-band inspection using discoverd: http://specs.openstack.org/openstack/ironic-specs/specs/kilo/inband-properties-discovery.html"},{"line_number":34,"context_line":""}],"source_content_type":"text/x-rst","patch_set":3,"id":"3a961159_b6760f3d","line":31,"in_reply_to":"3a961159_55ffb700","updated":"2015-01-19 09:20:22.000000000","message":"oh sorry, that\u0027s idea from one of J\u0027s and it\u0027s about IPA. I\u0027ll clarify (to my best understanding)","commit_id":"d822898a1b58661ef022bfa43be8abc8409aa369"},
{"author":{"_account_id":5805,"name":"Chris Krelle","email":"nobodycam@gmail.com","username":"nobodycam"},"change_message_id":"51161706b1c8809d41cbe78175c6e718ed97831e","unresolved":false,"context_lines":[{"line_number":127,"context_line":"Other deployer impact"},{"line_number":128,"context_line":"---------------------"},{"line_number":129,"context_line":""},{"line_number":130,"context_line":"None"},{"line_number":131,"context_line":""},{"line_number":132,"context_line":"Developer impact"},{"line_number":133,"context_line":"----------------"}],"source_content_type":"text/x-rst","patch_set":3,"id":"3a961159_f5d7c34c","line":130,"updated":"2015-01-16 23:54:45.000000000","message":"will this have any impact on : periodic_max_workers or rpc_thread_pool_size settings in conf file?","commit_id":"d822898a1b58661ef022bfa43be8abc8409aa369"},
{"author":{"_account_id":10239,"name":"Dmitry Tantsur","email":"dtantsur@protonmail.com","username":"dtantsur"},"change_message_id":"1512887f5353967c51d747f0559b6cc44aeda360","unresolved":false,"context_lines":[{"line_number":127,"context_line":"Other deployer impact"},{"line_number":128,"context_line":"---------------------"},{"line_number":129,"context_line":""},{"line_number":130,"context_line":"None"},{"line_number":131,"context_line":""},{"line_number":132,"context_line":"Developer impact"},{"line_number":133,"context_line":"----------------"}],"source_content_type":"text/x-rst","patch_set":3,"id":"3a961159_5695eb2f","line":130,"in_reply_to":"3a961159_f5d7c34c","updated":"2015-01-19 09:20:22.000000000","message":"not now at least. once we switch to oslo.service - yes, likely. will clarify.","commit_id":"d822898a1b58661ef022bfa43be8abc8409aa369"},
{"author":{"_account_id":5805,"name":"Chris Krelle","email":"nobodycam@gmail.com","username":"nobodycam"},"change_message_id":"9ff01b917a17ba461f24dbc4381a5f9023a3d713","unresolved":false,"context_lines":[{"line_number":63,"context_line":"  implemented there, get rid of the work around inside"},{"line_number":64,"context_line":"  ``driver_periodic_task``, and switch to using parallel periodic tasks from"},{"line_number":65,"context_line":"  Oslo."},{"line_number":66,"context_line":""},{"line_number":67,"context_line":".. _graduation into a new oslo.service: https://review.openstack.org/#/c/142659/"},{"line_number":68,"context_line":".. _parallel periodic tasks: https://review.openstack.org/#/c/134303/"},{"line_number":69,"context_line":""}],"source_content_type":"text/x-rst","patch_set":4,"id":"1a930d6b_baab4dd5","line":66,"updated":"2015-01-20 22:47:57.000000000","message":"As this will be out side the periodic_max_workers and rpc_thread_pool_size settings are you planning on some type of conf setting that would enable and disable driver periodic tasks or limit the number tasks that could be spawned simultaneously for driver tasks?","commit_id":"bf5eb4863f6fbbe2bced62e554898e3054020145"},
{"author":{"_account_id":10239,"name":"Dmitry Tantsur","email":"dtantsur@protonmail.com","username":"dtantsur"},"change_message_id":"43aa40f9e4aa4cbf55a723573ca4e2524eeb96c6","unresolved":false,"context_lines":[{"line_number":63,"context_line":"  implemented there, get rid of the work around inside"},{"line_number":64,"context_line":"  ``driver_periodic_task``, and switch to using parallel periodic tasks from"},{"line_number":65,"context_line":"  Oslo."},{"line_number":66,"context_line":""},{"line_number":67,"context_line":".. _graduation into a new oslo.service: https://review.openstack.org/#/c/142659/"},{"line_number":68,"context_line":".. _parallel periodic tasks: https://review.openstack.org/#/c/134303/"},{"line_number":69,"context_line":""}],"source_content_type":"text/x-rst","patch_set":4,"id":"1a930d6b_ccc9395c","line":66,"in_reply_to":"1a930d6b_baab4dd5","updated":"2015-01-21 11:08:51.000000000","message":"Good question. My answer would be: it will be covered by some sort of option eventually. I can\u0027t say, how it\u0027s going to look like, because my Oslo parallel tasks spec is still shaping. Once it\u0027s closer to it\u0027s final form, I\u0027m going to file a new spec here covering this question. wdyt?","commit_id":"bf5eb4863f6fbbe2bced62e554898e3054020145"},
{"author":{"_account_id":5805,"name":"Chris Krelle","email":"nobodycam@gmail.com","username":"nobodycam"},"change_message_id":"41959f4882c5dd4e9384de756ed081bf50928589","unresolved":false,"context_lines":[{"line_number":63,"context_line":"  implemented there, get rid of the work around inside"},{"line_number":64,"context_line":"  ``driver_periodic_task``, and switch to using parallel periodic tasks from"},{"line_number":65,"context_line":"  Oslo."},{"line_number":66,"context_line":""},{"line_number":67,"context_line":".. _graduation into a new oslo.service: https://review.openstack.org/#/c/142659/"},{"line_number":68,"context_line":".. _parallel periodic tasks: https://review.openstack.org/#/c/134303/"},{"line_number":69,"context_line":""}],"source_content_type":"text/x-rst","patch_set":4,"id":"1a930d6b_383ca42f","line":66,"in_reply_to":"1a930d6b_ccc9395c","updated":"2015-01-21 19:10:54.000000000","message":"Dmitry, That will work for me. Could we also maybe open a wish list or low priority bug just so we can track this. My fear is we\u0027ll lose it in the cycle change.","commit_id":"bf5eb4863f6fbbe2bced62e554898e3054020145"}
]}
