{"id":1233,"date":"2021-03-17T14:11:59","date_gmt":"2021-03-17T12:11:59","guid":{"rendered":"https:\/\/webs.uab.cat\/giq\/seminar\/from-static-to-dynamic-divergences\/"},"modified":"2021-03-17T14:11:59","modified_gmt":"2021-03-17T12:11:59","slug":"from-static-to-dynamic-divergences","status":"publish","type":"seminar","link":"https:\/\/webs.uab.cat\/giq\/seminar\/from-static-to-dynamic-divergences\/","title":{"rendered":"From Static to Dynamic Divergences"},"content":{"rendered":"<p>In this talk I will introduce an axiomatic approach for channel divergences and channel relative&nbsp;entropies that is based on three information-theoretic axioms of&nbsp;monotonicity under&nbsp;superchannels (i.e. generalized data processing inequality), additivity under tensor&nbsp;products, and normalization. I will show that these axioms are sufficient to give enough structure, leading to numerous&nbsp;properties that are applicable to all channel&nbsp;divergences. These include faithfulness, continuity, a type of triangle inequality, and&nbsp;boundedness between the min&nbsp;and max channel relative entropies. In addition, I will present&nbsp;a uniqueness theorem showing that the Kullback-Leibler divergence has only one&nbsp;extension to&nbsp;classical channels. For quantum channels, with the exception of the max&nbsp;relative entropy, this uniqueness does not hold. Instead, I will prove the optimality of&nbsp;the&nbsp;amortized channel extension of the Umegaki relative entropy, by showing that it&nbsp;provides a lower bound on all channel relative entropies that reduce to the&nbsp;Kullback-&nbsp;Leibler divergence on classical states. If time permits, I will also introduce the maximal channel extension&nbsp;of a given classical state divergence and discuss its properties.&nbsp;<\/p>\n","protected":false},"excerpt":{"rendered":"<p>In this talk I will introduce an axiomatic approach for channel divergences and channel relative&nbsp;entropies that is based on three information-theoretic axioms of&nbsp;monotonicity under&nbsp;superchannels (i.e. generalized data processing inequality), additivity under tensor&nbsp;products, and normalization. I will show that these axioms are sufficient to give enough structure, leading to numerous&nbsp;properties that are applicable to all channel&nbsp;divergences. [&hellip;]<\/p>\n","protected":false},"author":20,"featured_media":0,"template":"","class_list":["post-1233","seminar","type-seminar","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/webs.uab.cat\/giq\/wp-json\/wp\/v2\/seminar\/1233","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/webs.uab.cat\/giq\/wp-json\/wp\/v2\/seminar"}],"about":[{"href":"https:\/\/webs.uab.cat\/giq\/wp-json\/wp\/v2\/types\/seminar"}],"author":[{"embeddable":true,"href":"https:\/\/webs.uab.cat\/giq\/wp-json\/wp\/v2\/users\/20"}],"wp:attachment":[{"href":"https:\/\/webs.uab.cat\/giq\/wp-json\/wp\/v2\/media?parent=1233"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}