International Conference papers (Refereed):
-
Kazusato Oko, Yujin Song, Taiji Suzuki, Denny Wu:
Learning sum of diverse features: computational hardness and efficient gradient-based training for ridge combinations.
Proceedings of Thirty Seventh Conference on Learning Theory (COLT2024), PMLR 247:4009--4081, 2024.
-
Dake Bu, Wei Huang, Taiji Suzuki, Ji Cheng, Qingfu Zhang, Zhiqiang Xu, Hau-San Wong:
Provably Neural Active Learning Succeeds via Prioritizing Perplexing Samples.
Forty-first International Conference on Machine Learning (ICML2024), accepted.
-
Yihang Chen, Fanghui Liu, Taiji Suzuki, Volkan Cevher:
High-Dimensional Kernel Methods under Covariate Shift: Data-Dependent Implicit Regularization.
Forty-first International Conference on Machine Learning (ICML2024), accepted.
-
Michael Eli Sander, Raja Giryes, Taiji Suzuki, Mathieu Blondel, Gabriel Peyré:
How do Transformers Perform In-Context Autoregressive Learning?
Forty-first International Conference on Machine Learning (ICML2024), accepted.
-
Michael Poli, Armin W Thomas, Eric Nguyen, Stefano Massaroli, Pragaash Ponnusamy, Björn Deiseroth, Kristian Kersting, Taiji Suzuki, Brian Hie, Stefano Ermon, Christopher Ré, Ce Zhang:
Mechanistic Design and Scaling of Hybrid Architectures.
Forty-first International Conference on Machine Learning (ICML2024), accepted.
-
Juno Kim, Taiji Suzuki:
Transformers Learn Nonlinear Features In Context: Nonconvex Mean-field Dynamics on the Attention Landscape.
Forty-first International Conference on Machine Learning (ICML2024), accepted.
-
Rom Parnichkun, Stefano Massaroli, Alessandro Moro, Jimmy T.H. Smith, Ramin Hasani, Mathias Lechner, Qi An, Christopher Ré, Hajime Asama, Stefano Ermon, Taiji Suzuki, Michael Poli, Atsushi Yamashita:
State-Free Inference of State-Space Models: The Transfer Function Approach.
Forty-first International Conference on Machine Learning (ICML2024), accepted.
-
Shokichi Takakura, Taiji Suzuki:
Mean-field Analysis on Two-layer Neural Networks from a Kernel Perspective.
Forty-first International Conference on Machine Learning (ICML2024), accepted.
-
Kakei Yamamoto, Kazusato Oko, Zhuoran Yang, Taiji Suzuki:
Mean Field Langevin Actor-Critic: Faster Convergence and Global Optimality beyond Lazy Learning.
Forty-first International Conference on Machine Learning (ICML2024), accepted.
-
Kazusato Oko, Shunta Akiyama, Denny Wu, Tomoya Murata, Taiji Suzuki:
SILVER: Single-loop variance reduction and application to federated learning.
Forty-first International Conference on Machine Learning (ICML2024), accepted.
-
Yuka Hashimoto, Sho Sonoda, Isao Ishikawa, Atsushi Nitanda, Taiji Suzuki:
Koopman-based generalization bound: New aspect for full-rank weights.
The Twelfth International Conference on Learning Representations (ICLR2024), accepted.
-
Wei Huang, Ye Shi, Zhongyi Cai, Taiji Suzuki:
Understanding Convergence and Generalization in Federated Learning through Feature Learning Theory.
The Twelfth International Conference on Learning Representations (ICLR2024), accepted.
-
Keita Suzuki, Taiji Suzuki:
Optimal criterion for feature learning of two-layer linear neural network in high dimensional interpolation regime.
The Twelfth International Conference on Learning Representations (ICLR2024), accepted.
-
Yuto Nishimura, Taiji Suzuki:
Minimax optimality of convolutional neural networks for infinite dimensional input-output problems and separation from kernel methods.
The Twelfth International Conference on Learning Representations (ICLR2024), accepted.
-
Atsushi Nitanda, Kazusato Oko, Taiji Suzuki, Denny Wu:
Anisotropy helps: improved statistical and computational complexity of the mean-field Langevin dynamics under structured data.
The Twelfth International Conference on Learning Representations (ICLR2024), accepted.
-
Juno Kim, Kakei Yamamoto, Kazusato Oko, Zhuoran Yang, Taiji Suzuki:
Symmetric Mean-field Langevin Dynamics for Distributional Minimax Problems.
The Twelfth International Conference on Learning Representations (ICLR2024), accepted.
-
Taiji Suzuki, Denny Wu, Atsushi Nitanda:
Convergence of mean-field Langevin dynamics: Time and space discretization, stochastic gradient, and variance reduction.
Thirty-seventh Conference on Neural Information Processing Systems (NeurIPS2023), accepted.
(arXiv:2306.07221)
-
Alireza Mousavi-Hosseini, Denny Wu, Taiji Suzuki, Murat Erdogdu:
Gradient-Based Feature Learning under Structured Data.
Thirty-seventh Conference on Neural Information Processing Systems (NeurIPS2023), accepted.
(arXiv:2309.03843)
-
Jimmy Ba, Murat Erdogdu, Taiji Suzuki, Zhichao Wang, Denny Wu:
Learning in the Presence of Low-dimensional Structure: A Spiked Random Matrix Perspective.
Thirty-seventh Conference on Neural Information Processing Systems (NeurIPS2023), accepted.
-
Taiji Suzuki, Denny Wu, Kazusato Oko, Atsushi Nitanda:
Feature learning via mean-field Langevin dynamics: classifying sparse parities and beyond.
Thirty-seventh Conference on Neural Information Processing Systems (NeurIPS2023), accepted.
-
Shuhei Nitta, Taiji Suzuki, Albert Rodríguez Mulet, Atsushi Yaguchi, and Ryusuke Hirai:
Scalable Federated Learning for Clients with Different Input Image Sizes and Numbers of Output Categories.
Proceedings of the 22nd IEEE International Conference on Machine Learning and Applications (ICMLA2023), 2023.
-
Kazusato Oko, Shunta Akiyama, Taiji Suzuki:
Diffusion Models are Minimax Optimal Distribution Estimators.
Proceedings of the 40th International Conference on Machine Learning (ICML2023), PMLR 202:26517--26582, 2023. (also presented at the ICLR2023 workshop ME-FoMo 2023)
arXiv:2303.01861.
-
Tomoya Murata, Taiji Suzuki:
DIFF2: Differential Private Optimization via Gradient Differences for Nonconvex Distributed Learning.
Proceedings of the 40th International Conference on Machine Learning (ICML2023), PMLR 202:25523--25548, 2023.
arXiv:2302.03884.
-
Atsushi Nitanda, Kazusato Oko, Denny Wu, Nobuhito Takenouchi, Taiji Suzuki:
Primal and Dual Analysis of Entropic Fictitious Play for Finite-sum Problems.
Proceedings of the 40th International Conference on Machine Learning (ICML2023), PMLR 202:26266--26282, 2023.
arXiv:2303.02957.
-
Shokichi Takakura, Taiji Suzuki:
Approximation and Estimation Ability of Transformers for Sequence-to-Sequence Functions with Infinite Dimensional Input.
Proceedings of the 40th International Conference on Machine Learning (ICML2023), PMLR 202:33416--33447, 2023.
-
Atsushi Suzuki, Atsushi Nitanda, Taiji Suzuki, Jing Wang, Feng Tian, Kenji Yamanishi:
Tight and fast generalization error bound of graph embedding in metric space.
Proceedings of the 40th International Conference on Machine Learning (ICML2023), PMLR 202:33268--33284, 2023.
-
Hiroaki Kingetsu, Kenichi Kobayashi, Taiji Suzuki: Neural Network Module Decomposition and Recomposition with Superimposed Masks.
2023 International Joint Conference on Neural Networks (IJCNN), accepted.
-
Shunta Akiyama, Taiji Suzuki:
Excess Risk of Two-Layer ReLU Neural Networks in Teacher-Student Settings and its Superiority to Kernel Methods.
The Eleventh International Conference on Learning Representations (ICLR2023). arXiv:2205.14818.
-
Taiji Suzuki, Atsushi Nitanda, Denny Wu:
Uniform-in-time propagation of chaos for the mean field gradient Langevin dynamics.
The Eleventh International Conference on Learning Representations (ICLR2023).
-
Kazusato Oko, Shunta Akiyama, Tomoya Murata, Taiji Suzuki:
Versatile Single-Loop Method for Gradient Estimator: First and Second Order Optimality, and its Application to Federated Learning.
Accepted to OPT2022 (14th International OPT Workshop on Optimization for Machine Learning, at NeurIPS2022).
arXiv:2209.00361.
-
Kishan Wimalawarne, Taiji Suzuki:
Layer-wise Adaptive Graph Convolution Networks Using Generalized Pagerank.
Proceedings of The 14th Asian Conference on Machine Learning (ACML2022), PMLR 189:1117--1132, 2023.
arXiv:2108.10636.
-
Naoki Nishikawa, Taiji Suzuki, Atsushi Nitanda, Denny Wu:
Two-layer neural network on infinite dimensional data: global optimization guarantee in the mean-field regime.
Advances in Neural Information Processing Systems 35 (NeurIPS 2022), pp.32612--32623, 2022.
-
Jimmy Ba, Murat A. Erdogdu, Taiji Suzuki, Zhichao Wang, Denny Wu, Greg Yang:
High-dimensional Asymptotics of Feature Learning: How One Gradient Step Improves the Representation.
Advances in Neural Information Processing Systems 35 (NeurIPS 2022), pp.37932--37946, 2022.
arXiv:2205.01445.
-
Yuri Kinoshita, Taiji Suzuki:
Improved Convergence Rate of Stochastic Gradient Langevin Dynamics with Variance Reduction and its Application to Optimization.
Advances in Neural Information Processing Systems 35 (NeurIPS 2022), pp.19022--19034, 2022.
arXiv:2203.16217.
-
Tomoya Murata, Taiji Suzuki:
Escaping Saddle Points with Bias-Variance Reduced Local Perturbed SGD for Communication Efficient Nonconvex Distributed Learning.
Advances in Neural Information Processing Systems 35 (NeurIPS 2022), pp.5039--5051, 2022.
arXiv:2202.06083.
-
Chenyuan Xu, Kosuke Haruki, Taiji Suzuki, Masahiro Ozawa, Kazuki Uematsu, Ryuji Sakai:
Data-Parallel Momentum Diagonal Empirical Fisher (DP-MDEF): Adaptive Gradient Method is Affected by Hessian Approximation and Multi-Class Data.
2022 IEEE International Conference on Machine Learning and Applications (ICMLA2022), pp.1397--1404. DOI: 10.1109/ICMLA55696.2022.00221.
-
Hiroaki Mikami, Kenji Fukumizu, Shogo Murai, Shuji Suzuki, Yuta Kikuchi, Taiji Suzuki, Shin-ichi Maeda, Kohei Hayashi:
A Scaling Law for Synthetic-to-Real Transfer: How Much Is Your Pre-training Effective?
Proceedings of Machine Learning and Knowledge Discovery in Databases (ECML-PKDD 2022), Part III. Springer Lecture Notes in Computer Science, 13715, pp.477--492, 2022. DOI: https://doi.org/10.1007/978-3-031-26409-2_29.
arXiv:2108.11018.
-
Boris Muzellec, Kanji Sato, Mathurin Massias, Taiji Suzuki:
Dimension-free convergence rates for gradient Langevin dynamics in RKHS.
Proceedings of Thirty Fifth Conference on Learning Theory (COLT2022), PMLR 178:1356--1420, 2022.
arXiv:2003.00306.
-
Kengo Machida, Kuniaki Uto, Koichi Shinoda, Taiji Suzuki:
MSR-DARTS: Minimum Stable Rank of Differentiable Architecture Search.
2022 International Joint Conference on Neural Networks (IJCNN). DOI: 10.1109/IJCNN55064.2022.9892751.
arXiv:2009.09209.
-
Sho Okumoto and Taiji Suzuki: Learnability of convolutional neural networks for infinite dimensional input via mixed and anisotropic smoothness.
The Tenth International Conference on Learning Representations (ICLR2022), spotlight presentation.
-
Kazusato Oko, Taiji Suzuki, Atsushi Nitanda, and Denny Wu:
Particle Stochastic Dual Coordinate Ascent: Exponential convergent algorithm for mean field neural network optimization.
The Tenth International Conference on Learning Representations (ICLR2022).
-
Jimmy Ba, Murat A Erdogdu, Marzyeh Ghassemi, Shengyang Sun, Taiji Suzuki, Denny Wu, and Tianzong Zhang:
Understanding the Variance Collapse of SVGD in High Dimensions.
The Tenth International Conference on Learning Representations (ICLR2022).
-
Atsushi Nitanda, Denny Wu, Taiji Suzuki:
Convex Analysis of the Mean Field Langevin Dynamics. 25th International Conference on Artificial Intelligence and Statistics (AISTATS2022), Proceedings of Machine Learning Research, 151:9741--9757, 2022.
arXiv:2201.10469.
-
Chihiro Watanabe, Taiji Suzuki: AutoLL: Automatic Linear Layout of Graphs based on Deep Neural Network.
IEEE Symposium Series on Computational Intelligence (SSCI 2021), DOI: 10.1109/SSCI50451.2021.9659893.
arXiv:2108.02431
-
Atsushi Nitanda, Denny Wu, Taiji Suzuki:
Particle Dual Averaging: Optimization of Mean Field Neural Networks with Global Convergence Rate Analysis.
Advances in Neural Information Processing Systems 34 (NeurIPS 2021), pp.19608--19621, 2021.
arXiv:2012.15477.
-
Taiji Suzuki, Atsushi Nitanda:
Deep learning is adaptive to intrinsic dimensionality of model smoothness in anisotropic Besov space.
Advances in Neural Information Processing Systems 34 (NeurIPS 2021), pp.3609--3621, 2021 (spotlight, top 3% of all submissions).
arXiv:1910.12799.
-
Stefano Massaroli, Michael Poli, Sho Sonoda, Taiji Suzuki, Jinkyoo Park, Atsushi Yamashita, Hajime Asama:
Differentiable Multiple Shooting Layers.
Advances in Neural Information Processing Systems 34 (NeurIPS 2021), pp.16532--16544, 2021.
arXiv:2106.03885.
-
Shunta Akiyama, Taiji Suzuki:
On Learnability via Gradient Method for Two-Layer ReLU Neural Networks in Teacher-Student Setting.
ICML2021, PMLR 139:152--162, 2021.
-
Akira Nakagawa, Keizo Kato, Taiji Suzuki:
Quantitative Understanding of VAE as a Non-linearly Scaled Isometric Embedding. ICML2021, PMLR 139:7916--7926, 2021.
-
Tomoya Murata, Taiji Suzuki:
Bias-Variance Reduced Local SGD for Less Heterogeneous Federated Learning. ICML2021, PMLR 139:7872--7881, 2021.
arXiv:2102.03198.
-
Atsushi Yaguchi, Taiji Suzuki, Shuhei Nitta, Yukinobu Sakata, Akiyuki Tanizawa:
Decomposable-Net: Scalable Low-Rank Compression for Neural Networks.
IJCAI-2021, Main Track, pp.3249--3256, 2021. DOI: https://doi.org/10.24963/ijcai.2021/447.
arXiv:1910.13141.
-
Shingo Yashima, Atsushi Nitanda, Taiji Suzuki:
Exponential Convergence Rates of Classification Errors on Learning with SGD and Random Features.
AISTATS2021, PMLR 130:1954--1962, 2021.
arXiv:1911.05350.
-
Tomoya Murata, and Taiji Suzuki:
Gradient Descent in RKHS with Importance Labeling.
AISTATS2021, PMLR 130:1981--1989, 2021.
arXiv:2006.10925.
-
Taiji Suzuki, Shunta Akiyama:
Benefit of deep learning with non-convex noisy gradient descent: Provable excess risk bound and superiority to kernel methods.
ICLR2021 (selected as spotlight). (arXiv version: arXiv:2012.03224).
-
Shun-ichi Amari, Jimmy Ba, Roger Grosse, Xuechen Li, Atsushi Nitanda, Taiji Suzuki, Denny Wu, Ji Xu:
When Does Preconditioning Help or Hurt Generalization?
ICLR2021.
(arXiv version: arXiv:2006.10732).
-
Atsushi Nitanda, and Taiji Suzuki:
Optimal Rates for Averaged Stochastic Gradient Descent under Neural Tangent Kernel Regime.
ICLR2021 (selected for oral presentation; won an Outstanding Paper Award, given to 8 of the 860 accepted papers from 2997 submissions).
(arXiv version: arXiv:2006.12297).
-
Taiji Suzuki:
Generalization bound of globally optimal non-convex neural network training: Transportation map estimation by infinite dimensional Langevin dynamics. Advances in Neural Information Processing Systems 33 (NeurIPS 2020), pp.19224--19237, 2020. (selected as spotlight).
arXiv:2007.05824.
-
Kenta Oono, and Taiji Suzuki:
Optimization and Generalization Analysis of Transduction through Gradient Boosting and Application to Multi-scale Graph Neural Networks.
Advances in Neural Information Processing Systems 33 (NeurIPS 2020), pp.18917--18930, 2020. arXiv:2006.08550.
-
Laurent Dillard, Yosuke Shinya, Taiji Suzuki:
Domain Adaptation Regularization for Spectral Pruning.
BMVC2020 (British Machine Vision Conference 2020), 2020.
-
Taiji Suzuki, Hiroshi Abe, Tomoya Murata, Shingo Horiuchi, Kotaro Ito, Tokuma Wachi, So Hirai, Masatoshi Yukishima, Tomoaki Nishimura:
Spectral pruning: Compressing deep neural networks via spectral analysis and its generalization error. IJCAI-PRICAI 2020, pp. 2839--2846. (long version: arXiv:1808.08558).
-
Atsushi Nitanda, Taiji Suzuki:
Functional Gradient Boosting for Learning Residual-like Networks with Statistical Guarantees.
AISTATS2020, Proceedings of Machine Learning Research, 108:2981--2991, 2020.
-
Jingling Li, Yanchao Sun, Ziyin Liu, Taiji Suzuki and Furong Huang:
Understanding of Generalization in Deep Learning via Tensor Methods.
AISTATS2020, Proceedings of Machine Learning Research, 108:504--515, 2020.
Also presented at the ICML2019 Workshop "Understanding and Improving Generalization in Deep Learning."
-
Jimmy Ba, Murat Erdogdu, Taiji Suzuki, Denny Wu, Tianzong Zhang:
Generalization of Two-layer Neural Networks: An Asymptotic Viewpoint.
ICLR2020, selected as spotlight.
-
Kenta Oono and Taiji Suzuki:
Graph Neural Networks Exponentially Lose Expressive Power for Node Classification.
ICLR2020, selected as spotlight.
arXiv:1905.10947.
-
Taiji Suzuki, Hiroshi Abe, Tomoaki Nishimura:
Compression based bound for non-compressed network: unified generalization error analysis of large compressible deep neural network.
ICLR2020, selected as spotlight.
arXiv:1909.11274.
-
Yosuke Shinya, Edgar Simo-Serra, and Taiji Suzuki:
Understanding the Effects of Pre-training for Object Detectors via Eigenspectrum.
ICCV2019, Neural Architects Workshop (selected for the shortlist of strongest papers).
arXiv:1909.04021.
-
Atsushi Nitanda, Tomoya Murata, and Taiji Suzuki:
Sharp Characterization of Optimal Minibatch Size for Stochastic Finite Sum Convex Optimization.
2019 IEEE International Conference on Data Mining (ICDM2019), Beijing, China, pp.488--497, 2019. Regular paper; nominated as a Best Paper candidate.
-
Kenta Oono and Taiji Suzuki:
Approximation and Non-parametric Estimation of ResNet-type Convolutional Neural Networks.
ICML2019, Proceedings of Machine Learning Research, 97:4922--4931, 2019.
(arXiv:1903.10047).
-
Heishiro Kanagawa, Hayato Kobayashi, Nobuyuki Shimizu, Yukihiro Tagami, and Taiji Suzuki:
Cross-domain Recommendation via Deep Domain Adaptation.
41st European Conference on Information Retrieval (ECIR2019), Advances in Information Retrieval, pp. 20--29, 2019.
arXiv:1803.03018.
-
Atsushi Nitanda, Taiji Suzuki:
Stochastic Gradient Descent with Exponential Convergence Rates of Expected Classification Errors.
AISTATS2019, Proceedings of Machine Learning Research, 89:1417--1426, 2019.
arXiv:1806.05438.
-
Taiji Suzuki:
Adaptivity of deep ReLU network for learning in Besov and mixed smooth Besov spaces: optimal rate and curse of dimensionality.
The 7th International Conference on Learning Representations (ICLR2019), accepted.
[A corrected proof of Proposition 4 is available separately, provided by Satoshi Hayakawa, who pointed out the technical flaw.]
(arXiv:1810.08033).
-
Kazuo Yonekura, Hitoshi Hattori, and Taiji Suzuki:
Short-term local weather forecast using dense weather station by deep neural network.
In Proceedings of 2018 IEEE International Conference on Big Data (Big Data), pp.10--13, 2018.
DOI: 10.1109/BigData.2018.8622195.
-
Tomoya Murata, and Taiji Suzuki:
Sample Efficient Stochastic Gradient Iterative Hard Thresholding Method for Stochastic Sparse Linear Regression with Limited Attribute Observation.
Advances in Neural Information Processing Systems 31 (NeurIPS2018), pp.5312--5321, 2018. arXiv:1809.01765.
-
Atsushi Yaguchi, Taiji Suzuki, Wataru Asano, Shuhei Nitta, Yukinobu Sakata, Akiyuki Tanizawa:
Adam Induces Implicit Weight Sparsity in Rectifier Neural Networks.
In Proceedings of IEEE 17th International Conference on Machine Learning and Applications (ICMLA 2018), pp.17--20, 2018.
DOI: 10.1109/ICMLA.2018.00054.
-
Atsushi Nitanda and Taiji Suzuki:
Functional gradient boosting based on residual network perception.
ICML2018, Proceedings of the 35th International Conference on Machine Learning, 80:3819--3828, 2018.
arXiv:1802.09031.
-
Atsushi Nitanda and Taiji Suzuki:
Gradient Layer: Enhancing the Convergence of Adversarial Training for Generative Models.
AISTATS2018, Proceedings of Machine Learning Research, 84:454--463, 2018.
arXiv:1801.02227.
-
Masaaki Takada, Taiji Suzuki, and Hironori Fujisawa:
Independently Interpretable Lasso: A New Regularizer for Sparse Regression with Uncorrelated Variables.
AISTATS2018, Proceedings of Machine Learning Research, 84:1008--1016, 2018.
arXiv:1711.01796.
-
Taiji Suzuki:
Fast generalization error bound of deep learning from a kernel perspective.
AISTATS2018, Proceedings of Machine Learning Research, 84:1397--1406, 2018.
arXiv:1705.10182.
-
Song Liu, Akiko Takeda, Taiji Suzuki and Kenji Fukumizu:
Trimmed Density Ratio Estimation.
NIPS2017, pp.4518--4528, 2017.
arXiv:1703.03216.
-
Tomoya Murata and Taiji Suzuki:
Doubly Accelerated Stochastic Variance Reduced Dual Averaging Method for Regularized Empirical Risk Minimization.
NIPS2017, pp.608--617, 2017.
arXiv:1703.00439.
-
Atsushi Nitanda and Taiji Suzuki:
Stochastic Difference of Convex Algorithm and its Application to Training Deep Boltzmann Machines.
The 20th International Conference on Artificial Intelligence and Statistics (AISTATS2017),
Proceedings of Machine Learning Research, 54:470--478, 2017.
-
Taiji Suzuki, Heishiro Kanagawa, Hayato Kobayashi, Nobuyuki Shimizu, and Yukihiro Tagami:
Minimax Optimal Alternating Minimization for Kernel Nonparametric Tensor Learning.
The 30th Annual Conference on Neural Information Processing Systems (NIPS2016), pp. 3783-3791, 2016.
-
Heishiro Kanagawa, Taiji Suzuki, Hayato Kobayashi, Nobuyuki Shimizu, and Yukihiro Tagami:
Gaussian process nonparametric tensor estimator and its minimax optimality.
Proceedings of The 33rd International Conference on Machine Learning, pp. 1632–1641, 2016.
-
Song Liu, Taiji Suzuki, Masashi Sugiyama, and Kenji Fukumizu:
Structure Learning of Partitioned Markov Networks.
Proceedings of The 33rd International Conference on Machine Learning (ICML2016), pp. 439–448, 2016.
-
Taiji Suzuki and Heishiro Kanagawa:
Bayes method for low rank tensor estimation.
International Meeting on “High-Dimensional Data Driven Science” (HD3-2015), December 14--17, 2015, Kyoto, Japan. Oral presentation.
Journal of Physics: Conference Series, 699(1), pp. 012020, 2016.
-
Taiji Suzuki:
Convergence rate of Bayesian tensor estimator and its minimax optimality.
The 32nd International Conference on Machine Learning (ICML2015),
JMLR Workshop and Conference Proceedings 37:1273--1282, 2015.
-
Satoshi Hara, Tetsuro Morimura, Toshihiro Takahashi, Hiroki Yanagisawa, Taiji Suzuki:
A Consistent Method for Graph Based Anomaly Localization.
The 18th International Conference on Artificial Intelligence and Statistics (AISTATS2015),
JMLR Workshop and Conference Proceedings 38:333--341, 2015.
-
Song Liu, Taiji Suzuki, and Masashi Sugiyama:
Support Consistency of Direct Sparse-Change Learning in Markov Networks.
The Twenty-Ninth AAAI Conference on Artificial Intelligence (AAAI2015), 2015.
(arXiv:1407.0581).
-
Taiji Suzuki:
Stochastic Dual Coordinate Ascent with Alternating Direction Method of Multipliers.
International Conference on Machine Learning (ICML2014), JMLR Workshop and Conference Proceedings 32(1):736--744, 2014.
(arXiv version: arXiv:1311.0622)
This paper was also presented at OPT2013, the NIPS workshop "Optimization for Machine Learning".
-
Ryota Tomioka, and Taiji Suzuki:
Convex Tensor Decomposition via Structured Schatten Norm Regularization.
Advances in Neural Information Processing Systems (NIPS2013), 1331--1339, 2013.
-
Taiji Suzuki:
Dual Averaging and Proximal Gradient Descent for Online Alternating Direction Multiplier Method.
International Conference on Machine Learning (ICML2013), JMLR Workshop and Conference Proceedings 28(1):392--400, 2013.
-
Masashi Sugiyama, Takafumi Kanamori, Taiji Suzuki, Marthinus du Plessis, Song Liu, and Ichiro Takeuchi:
Density-Difference Estimation.
Advances in Neural Information Processing Systems (NIPS2012), 692--700, 2012.
-
Taiji Suzuki:
PAC-Bayesian Bound for Gaussian Process Regression and Multiple Kernel Additive Model.
Conference on Learning Theory (COLT2012), JMLR Workshop and Conference Proceedings 23:8.1--8.20, 2012.
-
Takafumi Kanamori, Akiko Takeda and Taiji Suzuki:
A Conjugate Property between Loss Functions and Uncertainty Sets in Classification Problems.
Conference on Learning Theory (COLT2012), JMLR Workshop and Conference Proceedings 23:29.1--29.23, 2012.
-
Taiji Suzuki and Masashi Sugiyama:
Fast Learning Rate of Multiple Kernel Learning: Trade-off between Sparsity and Smoothness.
Fifteenth International Conference on Artificial Intelligence and Statistics (AISTATS2012), selected as oral presentation.
JMLR Workshop and Conference Proceedings 22:1152--1183, 2012.
(long version, arXiv:1203.0565)
-
Taiji Suzuki:
Unifying Framework for Fast Learning Rate of Non-Sparse Multiple Kernel Learning.
Advances in Neural Information Processing Systems 24 (NIPS2011).
pp.1575--1583.
(long version, arXiv:1111.3781)
-
Ryota Tomioka, Taiji Suzuki, Kohei Hayashi and Hisashi Kashima:
Statistical Performance of Convex Tensor Decomposition.
Advances in Neural Information Processing Systems 24 (NIPS2011).
pp.972--980.
-
Makoto Yamada, Taiji Suzuki, Takafumi Kanamori, Hirotaka Hachiya and Masashi Sugiyama:
Relative Density-Ratio Estimation for Robust Distribution Comparison.
Advances in Neural Information Processing Systems 24 (NIPS2011).
pp.594--602.
(long version, arXiv:1106.4729)
-
Ryota Tomioka and Taiji Suzuki:
Regularization Strategies and Empirical Bayesian Learning for MKL.
NIPS2010 Workshop: New Directions in Multiple Kernel Learning, 2010.
(arXiv:1011.3090)
-
Ryota Tomioka, Taiji Suzuki, Masashi Sugiyama and Hisashi Kashima:
A Fast Augmented Lagrangian Algorithm for Learning Low-Rank Matrices.
27th International Conference on Machine Learning (ICML2010).
pp.1087--1094.
-
Taiji Suzuki and Masashi Sugiyama:
Sufficient dimension reduction via squared-loss mutual information estimation.
Thirteenth International Conference on Artificial Intelligence and Statistics (AISTATS2010).
JMLR Workshop and Conference Proceedings 9: pp.781--788, 2010.
-
Masashi Sugiyama, Ichiro Takeuchi, Takafumi Kanamori, Taiji Suzuki, Hirotaka Hachiya, and Daisuke Okanohara:
Conditional density estimation via least-squares density ratio estimation.
Thirteenth International Conference on Artificial Intelligence and Statistics (AISTATS2010).
JMLR Workshop and Conference Proceedings 9: pp.804--811, 2010.
-
Masashi Sugiyama, Satoshi Hara, Paul von Bunau, Taiji Suzuki, Takafumi Kanamori, and Motoaki Kawanabe:
Direct density ratio estimation with dimensionality reduction.
2010 SIAM International Conference on Data Mining (SDM2010).
pp.595--606.
-
Ryota Tomioka and Taiji Suzuki:
Sparsity-accuracy trade-off in MKL.
NIPS 2009 Workshop: Understanding Multiple Kernel Learning Methods, Whistler, Canada.
(presented by T. Suzuki) (arXiv:1001.2615)
-
Ryota Tomioka, Taiji Suzuki, and Masashi Sugiyama:
Super-Linear Convergence of Dual Augmented Lagrangian Algorithm for Sparse Learning.
NIPS 2009 Workshop: Optimization for Machine Learning, Whistler, Canada.
(arXiv:0911.4046)
-
Taiji Suzuki, Masashi Sugiyama, and Toshiyuki Tanaka:
Mutual information approximation via maximum likelihood estimation of density ratio.
2009 IEEE International Symposium on Information Theory (ISIT2009). pp.463--467, Seoul, Korea, 2009.
-
Taiji Suzuki, and Masashi Sugiyama:
Estimating Squared-loss Mutual Information for Independent Component Analysis.
ICA 2009, Paraty, Brazil. Lecture Notes in Computer Science, Vol. 5441, pp.130--137, Springer, Berlin, 2009.
-
Taiji Suzuki, Masashi Sugiyama, Takafumi Kanamori and Jun Sese:
Mutual information estimation reveals global associations between stimuli and biological processes.
In Proceedings of the Seventh Asia Pacific Bioinformatics Conference (APBC 2009), Beijing, China, 2009.
-
Taiji Suzuki, Masashi Sugiyama, Jun Sese, and Takafumi Kanamori:
Approximating mutual information by maximum likelihood density ratio estimation.
In Proceedings of the 3rd Workshop on New Challenges for Feature Selection in Data Mining and Knowledge Discovery (FSDM2008), JMLR Workshop and Conference Proceedings, Vol. 4, pp.5--20, 2008.
-
Taiji Suzuki, Masashi Sugiyama, Jun Sese, and Takafumi Kanamori:
A least-squares approach to mutual information estimation with application in variable selection.
In Proceedings of the 3rd Workshop on New Challenges for Feature Selection in Data Mining and Knowledge Discovery (FSDM2008), Antwerp, Belgium, 2008.
-
Taiji Suzuki, Takamasa Koshizen, Kazuyuki Aihara and Hiroshi Tsujino:
Learning to estimate user interest utilizing the variational Bayes estimator.
Intelligent Systems Design and Applications (ISDA 2005), pp.94--99, Wroclaw, Poland, September 2005.
-
Tetsuya Hoya, Gen Hori, Havagim Bakardjian, Tomoaki Nishimura, Taiji Suzuki, Yoichi Miyawaki, Arao Funase, and Jianting Cao:
Classification of Single Trial EEG Signals by a Combined Principal + Independent Component Analysis and Probabilistic Neural Network Approach.
Proc. ICA2003, pp.197--202, Nara, Japan, January 2003.