We have several exciting and important developments in the ACL conference to highlight at the top of this second call for papers. As our field has grown considerably over the last five years, it is important that our reviewing process scales accordingly. For this year's conference, we are introducing a new review process designed to better manage conflicts of interest (COIs) and to better match submissions with appropriate reviewers.
Global Profile: please fill out this form so that we can compute conflicts of interest and better match papers with appropriate reviewers. Theme: as the conference quickly approaches, we felt this was a great time to reflect on the state of the field of NLP, as well as its future, with a special Theme track. We hope you will consider submitting a paper! Archival and dual submissions can be a bit tricky.
Publication Date: The official publication date is June 19, just over two weeks before the conference begins. The official publication date may affect the deadline for any patent filings related to published work. For those rare conferences whose proceedings are published in the ACL Anthology after the conference is over, the official publication date remains the first day of the conference.
Call For Papers - Main Conference. The 58th Annual Meeting of the Association for Computational Linguistics (ACL) invites the submission of long and short papers on substantial, original, and unpublished research in all aspects of Computational Linguistics and Natural Language Processing. The last few years have witnessed unprecedented growth in NLP since the field began over sixty years ago. The availability of large amounts of data and computing resources has led to new models and representations and to exciting results on many NLP benchmark tasks.
State-of-the-art (SOTA) systems have approached human performance on several benchmark tasks. We anticipate having a special session for this theme at the conference and a Best Thematic Paper Award in addition to the traditional Best Paper Awards.
ACL has the goal of a broad technical program. Relevant topics for the conference include, but are not limited to, the following areas (in alphabetical order):
Long paper submissions must describe substantial, original, completed and unpublished work. Wherever appropriate, concrete evaluation and analysis should be included. Review forms will be made available prior to the deadlines.
Long papers will be presented orally or as posters, as determined by the program committee. Decisions about which papers will be presented orally and which as posters will be based on the nature, rather than the quality, of the work. There will be no distinction in the proceedings between long papers presented orally and those presented as posters. Short paper submissions must describe original and unpublished work.
Please note that a short paper is not a shortened long paper. Instead, short papers should have a point that can be made in a few pages.
In other words, these tools not only help enterprises understand and fast-track their AI-enabled decision-making processes, but also tell them how certain we are about those decisions.
The conference will take place in Palermo, Italy in early June. There is a lot of focus on Bayesian deep learning at the moment, with many researchers tackling this problem by building on top of neural networks and making the inference look more Bayesian. We use a different strategy and start with a Gaussian process, which is a well-understood Bayesian method that allows us to classify images with calibrated uncertainty and accuracy. In this paper, we use a fully Bayesian model and deep convolutional Gaussian processes to perform image classification.
Using this approach, we achieve state-of-the-art performance in terms of both the accuracy of our predictions and their uncertainty quantification, features that are not guaranteed by other Bayesian models. This is essential in tasks where decision-making is linked to the outcomes of the prediction model. Our approach quantifies the uncertainty when classifying handwritten digits.
While the neural network is certain we are looking at the number 1 in the above image (third column from the right), our models quantify uncertainty, accounting for the possibility of a 7. The number is, in fact, a 7, a possibility that the neural network has dismissed.
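The calibrated-probability behaviour described above can be illustrated with a much simpler, shallow GP classifier. The sketch below is not the paper's deep convolutional GP; it is a minimal scikit-learn example on a 1-versus-7 digits task, with hyperparameters chosen purely for illustration, showing how GP classification yields per-class probabilities rather than bare labels:

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF
from sklearn.model_selection import train_test_split

# Use only the digits 1 and 7, mirroring the 1-vs-7 ambiguity discussed above.
digits = load_digits()
keep = np.isin(digits.target, (1, 7))
X, y = digits.data[keep] / 16.0, digits.target[keep]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0, test_size=0.5)

# A shallow GP classifier: predictions come with class probabilities attached.
gpc = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=8.0), random_state=0)
gpc.fit(X_tr, y_tr)

proba = gpc.predict_proba(X_te)   # shape (n_test, 2); each row sums to 1
confidence = proba.max(axis=1)    # how sure the model is about each test digit
```

The `confidence` vector makes ambiguous inputs visible: test images whose maximum class probability is close to 0.5 are exactly the 1-versus-7 cases the model is unsure about, which is the behaviour a point-estimate network cannot express.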
Scaling Bayesian inference to high-dimensional input spaces is a hot, yet currently unresolved, research topic. While neural network techniques do scale, their uncertainty predictions are either unreliable or non-Bayesian. Our approach achieves both scalability and principled uncertainty while also providing a straightforward implementation that does not require any additional computational overhead.
In other words, the unprincipled uncertainty estimates associated with neural networks are transformed into approximately Bayesian uncertainty estimates. Predictive distributions produced by various inference methods with ReLU activation functions in single-layer neural networks on a toy regression task. Our method (right) obtains results close to the ground truth (left) when compared to other scalable techniques such as variational inference on network weights and Monte Carlo dropout.
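For context, Monte Carlo dropout, one of the baselines in the figure, turns a single dropout-trained network into an uncertainty estimator by keeping dropout active at test time. Below is a minimal numpy sketch on a toy 1-D regression task with a hypothetical one-hidden-layer network; everything here (architecture, learning rate, data) is an illustrative assumption, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression data: noisy sine wave
X = rng.uniform(-3, 3, (200, 1))
y = np.sin(X) + 0.1 * rng.standard_normal((200, 1))

# One-hidden-layer ReLU network trained with inverted dropout
H, p_drop, lr, n = 64, 0.2, 1e-2, len(X)
W1 = 0.5 * rng.standard_normal((1, H)); b1 = np.zeros(H)
W2 = 0.5 * rng.standard_normal((H, 1)); b2 = np.zeros(1)

for _ in range(2000):
    mask = (rng.random(H) > p_drop) / (1 - p_drop)  # inverted dropout mask
    a = X @ W1 + b1                                 # pre-activations
    h = np.maximum(a, 0) * mask                     # ReLU + dropout
    err = h @ W2 + b2 - y                           # residuals (squared loss)
    da = (err @ W2.T) * mask * (a > 0)              # backprop through dropout+ReLU
    W2 -= lr * h.T @ err / n;  b2 -= lr * err.mean(0)
    W1 -= lr * X.T @ da / n;   b1 -= lr * da.mean(0)

# MC dropout: keep sampling dropout masks at test time, then average
X_test = np.linspace(-4, 4, 9).reshape(-1, 1)
passes = []
for _ in range(200):
    mask = (rng.random(H) > p_drop) / (1 - p_drop)
    passes.append(np.maximum(X_test @ W1 + b1, 0) * mask @ W2 + b2)
passes = np.stack(passes)
pred_mean, pred_std = passes.mean(0), passes.std(0)
```

Each stochastic pass is one sample from the approximate predictive distribution, and the spread `pred_std` is the uncertainty estimate; as the figure above indicates, this estimate can be poorly calibrated compared to the ground-truth posterior.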
Gaussian processes (GPs) are widely recognised for their versatility. In this paper, we demonstrate how to combine two previously separate approaches to scaling inference and learning in Gaussian processes: the sparse variational approach and state-space model-based approaches. We demonstrate the efficiency of this combination by applying our method to large datasets.
As a result, our approach is faster than classic sparse GP and state-space model approaches and can be used for a greater range of applications, including gradient-based learning using valid objectives.
We can also use the compositionality of variational inference to build more complex models with ease.
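To make the sparse side of this combination concrete, here is a minimal numpy sketch of inducing-point GP regression in the style of the sparse variational approximation. The toy data, kernel, and hyperparameters are assumptions for illustration, and the paper's state-space machinery is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf(A, B, lengthscale=1.0):
    """Squared-exponential kernel between row-vector collections A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

# Toy data: 200 noisy observations of a sine
X = rng.uniform(-3, 3, (200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)

Z = np.linspace(-3, 3, 15)[:, None]      # 15 inducing inputs summarize 200 points
noise = 0.1 ** 2

Kmm = rbf(Z, Z) + 1e-8 * np.eye(len(Z))  # jitter for numerical stability
Kmn = rbf(Z, X)
Sigma = Kmm + Kmn @ Kmn.T / noise        # matrix appearing in the optimal q(u)

# Predictive mean and variance at test points, routed through the inducing set
Xs = np.linspace(-3, 3, 50)[:, None]
Ksm = rbf(Xs, Z)
mean = Ksm @ np.linalg.solve(Sigma, Kmn @ y) / noise
var = (1.0
       - np.sum(Ksm * np.linalg.solve(Kmm, Ksm.T).T, axis=1)
       + np.sum(Ksm * np.linalg.solve(Sigma, Ksm.T).T, axis=1))
```

All linear algebra involving the data passes through the 15 inducing inputs, so the dominant cost scales with the number of inducing points rather than the number of observations, which is what makes the sparse variational approach attractive at scale.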
Introducing our AISTATS papers
The efficiency of the algorithm relies on statistical properties of the model, and the quality of the approximation is easily understood and controlled. GPs are beautiful objects that allow us to quantify uncertainty in interpretable models, unlike the main trend in deep learning. Developing better approximations and faster models is an exciting area to work in, and one that is readily useful in the real world.
Predictions of our models (blue), given data points (orange), supported by a much smaller number of inducing states that summarize their location and curvature (black curves).
In this paper, we present a novel approach to scaling up Gaussian process models to more complex problems. This is an ongoing area of research, and innovation in this area allows for the wider application of Gaussian process models.
We use a divide-and-conquer approach to break the problem into a set of simpler sub-problems, each handled by its own Gaussian process. When we put everything back together, this creates a novel mixture model that can scale to higher-dimensional problems than previous approaches, yielding a parsimonious yet flexible and powerful model that predicts well.
The graphs below show the application of a naive implementation of the mixture-of-experts model (top) and our approach (bottom). The colours refer to the different experts used to explain the data; in the top plot we need more experts, and the partitioning is not interpretable.
In the bottom plot, our approach is able to partition the data using only two experts in a very interpretable and meaningful fashion. The papers discussed in this post include Bayesian Image Classification with Deep Convolutional Gaussian Processes by Vincent Dutordoir, Mark van der Wilk, Artem Artemev, and James Hensman, and the work on scaling up Gaussian process models by Gadd, Sara Wade, and Alexis Boukouvalas.
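A toy version of this divide-and-conquer idea can be sketched with scikit-learn. This is an illustrative hard-gated mixture, not the paper's model; the clustering step, kernels, and two-regime toy data are all assumptions:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Toy data with two regimes, so two experts should suffice.
X = rng.uniform(-3, 3, (300, 1))
y = np.where(X[:, 0] < 0, np.sin(3 * X[:, 0]), 0.5 * X[:, 0])
y = y + 0.05 * rng.standard_normal(300)

# Divide: partition the inputs; conquer: fit one GP expert per partition.
k = 2
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
experts = []
for c in range(k):
    idx = km.labels_ == c
    gp = GaussianProcessRegressor(kernel=RBF(1.0) + WhiteKernel(1e-2),
                                  random_state=0)
    gp.fit(X[idx], y[idx])
    experts.append(gp)

# Recombine: route each test point to its nearest expert (hard gating).
Xs = np.linspace(-3, 3, 100)[:, None]
assign = km.predict(Xs)
pred = np.empty(len(Xs))
for c in range(k):
    sel = assign == c
    if sel.any():
        pred[sel] = experts[c].predict(Xs[sel])
```

Each expert only ever factorises its own smaller kernel matrix, which is where the scalability comes from; unlike the paper, this sketch fixes the partition up front with k-means rather than learning it with the experts.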
AISTATS is an interdisciplinary gathering of researchers at the intersection of computer science, artificial intelligence, machine learning, statistics, and related areas. Since its inception, the primary goal of AISTATS has been to broaden research in these fields by promoting the exchange of ideas among them. Papers will be selected via a rigorous double-blind peer-review process.
All accepted papers will be presented at the Conference as contributed talks or as posters and will be published in the Proceedings. Solicited topics include, but are not limited to:
- Models and estimation: graphical models, causality, Gaussian processes, approximate inference, kernel methods, nonparametric models, statistical and computational learning theory, manifolds and embedding, sparsity and compressed sensing
- Classification, regression, density estimation, unsupervised and semi-supervised learning, clustering, topic models
- Structured prediction, relational learning, logic and probability
- Reinforcement learning, planning, control
- Game theory, no-regret learning, multi-agent systems
- Algorithms and architectures for high-performance computation in AI and statistics
- Software for and applications of AI and statistics
- Deep learning, including optimization, generalization and architectures
- Trustworthy learning, including learning with privacy and fairness, interpretability, and robustness
For a more detailed list of keywords, please see here.
Submission Requirements for Proceedings Track: Electronic submission of papers is required. Papers may be up to 8 double-column pages in length, excluding references.
Authors may also optionally submit supplementary material. Papers for talks and posters will be treated equally in publication. My other interests include the Ancile project, which introduces language-level control for data usage (WPES'19), and the OpenRec project, which proposes a modular design for modern recommender systems.
On the industry side, I have development experience with large-scale systems such as Amazon Alexa and OpenStack. Recovering participants' performance on their data when using federated learning with robustness and privacy techniques.
A novel platform that enables control over an application's data usage with language-level policies, implementing use-based privacy.
This project discusses a new trade-off between privacy and fairness. We observe that training a machine learning model with differential privacy reduces accuracy on underrepresented groups.
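The mechanism behind this accuracy drop can be sketched in a few lines. In DP-SGD-style training, each example's gradient is clipped to a fixed norm and Gaussian noise is added; examples from underrepresented groups, whose gradients tend to be larger and point in different directions, lose the most signal to clipping. Everything below (the groups, gradient values, and hyperparameters) is a hypothetical illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_sgd_step(w, per_example_grads, clip_norm=1.0, noise_mult=1.0, lr=0.1):
    """One differentially private SGD step: clip each example's gradient
    to clip_norm, average, then add noise calibrated to the clipping norm."""
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    mean_g = np.mean(clipped, axis=0)
    noise = rng.normal(0.0, noise_mult * clip_norm / len(per_example_grads),
                       size=w.shape)
    return w - lr * (mean_g + noise)

# Hypothetical per-example gradients: a small minority group whose (larger,
# differently-directed) gradients are clipped the hardest.
majority = [np.array([1.0, 0.0]) for _ in range(90)]
minority = [np.array([0.0, 5.0]) for _ in range(10)]
w = dp_sgd_step(np.zeros(2), majority + minority)
```

With `noise_mult=0` the clipped mean gradient is `[0.9, 0.1]`: the minority direction's contribution drops from 0.5 to 0.1 after clipping while the majority direction is untouched, and this asymmetric signal loss is the disparate-accuracy effect described above.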
We introduce a constrain-and-scale attack, a form of data poisoning that can stealthily inject a backdoor into one of the participating models during a single round of federated learning training. This attack can evade proposed defenses and propagate the backdoor to the global server, which then distributes the compromised model to other participants. OpenRec: an open and modular Python framework that supports extensible and adaptable research in recommender systems.
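The "scale" half of the attack can be seen in a toy federated-averaging round. This is a numpy sketch with made-up weights; the real constrain-and-scale attack additionally constrains the malicious update so that anomaly detectors do not flag it:

```python
import numpy as np

# Federated averaging over n participants: the server replaces the global
# model with the average of the submitted models.
n = 10
global_w = np.zeros(3)
benign = [global_w + np.array([0.1, 0.0, -0.1]) for _ in range(n - 1)]

# The attacker wants the aggregate to equal its backdoored weights. Scaling
# its submission (approximately) cancels the averaging, because benign
# submissions stay close to the current global weights.
backdoored = np.array([1.0, -2.0, 0.5])
malicious = n * backdoored - (n - 1) * global_w

aggregated = np.mean(benign + [malicious], axis=0)
```

Because benign participants submit models near the current global weights, the scaled submission dominates the average and the aggregated model lands almost exactly on the backdoored weights, which the server then distributes to everyone.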
Eugene Bagdasaryan. Nov: became a PhD candidate. Title: "Evaluating privacy preserving techniques in machine learning". June: Digital Life Initiative fellow.
Submissions are limited to 8 pages, excluding references, using the LaTeX style file we provide below. The number of pages containing references alone is not limited. Note that reviewers are under no obligation to examine your supplementary material. If you have only one supplementary PDF file, please upload it as is; otherwise, gather everything into a single zip file.
Formatting information including LaTeX style files is here. We do not support submission in preparation systems other than LaTeX. Please do not modify the layout given by the style file. If you have questions about the style file or its usage, please contact the publications chair. Please remove all identifying information from your submission, including author names, affiliations, and any acknowledgments.
Self-citations can present a special problem: we recommend leaving in a moderate number of self-citations for published or otherwise well-known work.
For unpublished or less-well-known work, or for large numbers of self-citations, it is up to the authors' discretion how best to preserve anonymity. Possibilities include leaving out a citation altogether, including it but replacing the citation text with "removed for anonymous submission," or leaving the citation as-is; authors should choose for each citation the treatment that is least likely to reveal authorship.
Previous tech-report or workshop versions of a paper can similarly present a problem for anonymization. We suggest leaving out any identifying information for such versions, but bringing them to the attention of the program committee via the submission page.
Reviewers will be instructed that tech reports including reports on sites such as arXiv and papers in workshops without archival proceedings do not count as prior publication. Submitted manuscripts should not have been previously published in a journal or in the proceedings of a conference, and should not be under consideration for publication at another conference at any point during the AISTATS review process.
It is acceptable to have a substantially extended version of the submitted paper under consideration simultaneously for journal publication, so long as the journal version's planned publication date is in July or later, the journal submission does not interfere with AISTATS's right to publish the paper, and the situation is clearly described at the time of AISTATS submission. Please describe the situation in the appropriate box on the submission page and do not include author information in the submission itself, to avoid accidental unblinding.
As mentioned above, reviewers will be instructed that tech reports including reports on sites such as arXiv and papers in workshops without archival proceedings do not count as prior publication.
Submission Deadline: See the submission deadline and other important dates here.

PLDI is a premier forum for programming language research, broadly construed, including design, implementation, theory, applications, and performance. Novel system designs, thorough empirical work, well-motivated theoretical results, and new application areas are all welcome emphases in strong PLDI submissions.
Reviewers will evaluate each contribution for its accuracy, significance, originality, and clarity. Papers should identify what has been accomplished and how it relates to previous work.
Deadlines and formatting requirements, detailed below, will be strictly enforced, with extremely rare extenuating circumstances considered at the discretion of the Program Chair. Authors will have the opportunity to respond to initial reviews to correct and clarify technical concerns. Authors may contact only the Program Chair about submitted papers during and after the review process. PLDI uses double-blind reviewing.
This means that author names and affiliations must be omitted from the submission. Additionally, if the submission refers to prior work done by the authors, that reference should be made in third person.
These are firm submission requirements. Any supplementary material must also be anonymized. But there are many gray areas and trade-offs. If you have any doubts about how to interpret the double-blind rules, please contact the Program Chair; err on the side of contacting the Program Chair for complex cases that are not fully covered by the FAQ. Authors can submit multiple times prior to the firm deadline.
Only the last submission will be reviewed. There is no abstract deadline.
The submission site requires entering author names and affiliations, relevant topics, and potential conflicts. Addition or removal of authors after the submission deadline will need to be approved by the Program Chair, as this kind of change potentially undermines the goal of eliminating conflicts during paper assignment. When submitting the paper, you will need to declare potential conflicts. Conflicts should be declared between an adviser and an advisee.
Other conflicts include institutional conflicts, financial conflicts of interest, friends or relatives, or any recent co-authors on papers and proposals (within the last 2 years). Please do not declare spurious conflicts: such incorrect conflicts are especially harmful if the aim is to exclude potential reviewers, so spurious conflicts can be grounds for rejection. If you are unsure about a conflict, please consult the Program Chair.